Pipelines for AI services typically expose a large number of inference parameters. To get the best accuracy from such a pipeline for a particular use case, a tuning process is essential, which requires exploring the parameter space. Manual tuning requires in-depth knowledge of every module in the pipeline, and it is simply not feasible when the parameter space is large and high-dimensional, even with datasets and ground-truth labels that allow quantitative analysis of the pipeline's accuracy.
PipeTuner is a tool that efficiently explores the parameter space and automatically finds the optimal parameters for a pipeline, i.e., those that yield the highest KPI on the dataset provided by the user. The user is not required to have technical knowledge of the pipeline or its parameters. PipeTuner has been adopted in many of NVIDIA's AI products and services, such as the state-of-the-art multi-object trackers in DeepStream, and has significantly improved their accuracy.
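For intuition, the sketch below shows the general idea behind automated parameter tuning: propose candidate parameter sets, run the pipeline on a labeled dataset, score the resulting KPI, and keep the best candidate. This is a minimal, hypothetical illustration in Python; the function names, parameter names, ranges, and the plain random-search strategy are assumptions made for clarity and do not reflect PipeTuner's actual configuration or internals.

```python
import random

# Hypothetical search space: names and ranges are illustrative only,
# not actual DeepStream or PipeTuner parameters.
SEARCH_SPACE = {
    "detector_confidence_threshold": (0.1, 0.9),
    "tracker_max_shadow_tracking_age": (10.0, 90.0),
    "tracker_min_matching_score": (0.1, 0.8),
}

def sample_params(space):
    """Draw one random candidate from the search space."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in space.items()}

def evaluate_kpi(params):
    """Stand-in for running the real pipeline on a labeled dataset and
    scoring it with an accuracy KPI (e.g., a tracking metric).
    Here it returns a synthetic score so the sketch runs end to end."""
    return -sum((value - 0.5) ** 2 for value in params.values())

def tune(num_trials=100):
    """Simple random search: keep the best-scoring parameter set."""
    best_params, best_kpi = None, float("-inf")
    for _ in range(num_trials):
        params = sample_params(SEARCH_SPACE)
        kpi = evaluate_kpi(params)
        if kpi > best_kpi:
            best_params, best_kpi = params, kpi
    return best_params, best_kpi
```

PipeTuner's actual search is more sample-efficient than plain random sampling, but the loop above captures the core workflow: propose parameters, measure the KPI on user-provided data with ground-truth labels, and keep the best configuration.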
This document is a complete user guide for PipeTuner. Specifically, PipeTuner is meant to be used with other NVIDIA products, such as the DeepStream SDK and Metropolis, as described below:
Typical DeepStream pipelines use a detector (PGIE) and a multi-object tracker (MOT) to perform single-camera tracking for each stream. PipeTuner optimizes the detector and MOT parameters to achieve optimal single-camera tracking accuracy.
Metropolis MTMC uses DeepStream as a perception module to perform single-camera tracking for each stream and then performs MTMC analytics. PipeTuner can optimize the MTMC parameters only, or perform end-to-end (E2E) tuning that covers all of the DeepStream and MTMC parameters to achieve optimal MTMC accuracy. The difference between the two modes is essentially the scope of the search space, as illustrated in the sketch below.
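The hypothetical Python snippet below illustrates the two tuning scopes: MTMC-only tuning searches over the analytics parameters alone, while E2E tuning searches over the combined DeepStream (detector + tracker) and MTMC parameters. The group and parameter names are placeholders, not PipeTuner's actual configuration keys.

```python
# Hypothetical parameter groups; names and ranges are placeholders.
DEEPSTREAM_PARAMS = {
    "detector_confidence_threshold": (0.1, 0.9),
    "tracker_min_matching_score": (0.1, 0.8),
}
MTMC_PARAMS = {
    "reid_similarity_threshold": (0.3, 0.9),
    "spatio_temporal_distance_weight": (0.0, 1.0),
}

def build_search_space(mode):
    """Select which parameter groups are exposed to the tuner."""
    if mode == "mtmc_only":
        return dict(MTMC_PARAMS)
    if mode == "e2e":
        # E2E tuning optimizes the perception and analytics parameters
        # jointly against the final MTMC accuracy KPI.
        return {**DEEPSTREAM_PARAMS, **MTMC_PARAMS}
    raise ValueError(f"unknown tuning mode: {mode}")
```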
Sample config files with different detectors and tracking algorithms are provided on NGC to help users understand and set up the PipeTuner workflow. Users can then customize the pipelines and datasets for their own use cases.
This collection contains the following resources for setting up and using PipeTuner:
Key features in PipeTuner 1.0 release:
| Item | Documentation |
|---|---|
| Documentation | PipeTuner User Guide |
| Asset | Applicable EULA | Notes |
|---|---|---|
| PipeTuner Container | NVIDIA_PipeTuner_EULA | A copy of the license is available at the following path inside the container: /pipe-tuner/NVIDIA_PipeTuner_EULA.pdf |
NOTE: By pulling, downloading, or using PipeTuner, you accept the terms and conditions of the EULA listed above.
For DeepStream SDK and Metropolis Microservices, please refer to their own licenses.
NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.