PipeTuner

Description
PipeTuner is an automatic tuning tool that efficiently explores the parameter space and finds the optimal pipeline parameters, i.e., those that yield the highest KPI on the dataset provided by the user. PipeTuner containers can be pulled from this page.
Publisher
-
Latest Tag
1.0
Modified
May 14, 2024
Compressed Size
295.38 MB
Multinode Support
No
Multi-Arch Support
No

Introduction

This page introduces the PipeTuner collection. PipeTuner is an automatic tuning tool that efficiently explores the parameter space and finds the optimal pipeline parameters, i.e., those that yield the highest KPI on the dataset provided by the user.

PipeTuner Container

Container Name | Architecture | License Type | Notes
pipetuner:1.0 | x86 | NVIDIA_PipeTuner_EULA | The PipeTuner container enables automatic tuning for vision pipelines. It should be used with DeepStream or Metropolis Microservices containers following the instructions below.
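
If you prefer to pull the PipeTuner image manually (the sample setup script described later also leaves this image on your system), a command along the following lines should work; the image path and tag match the docker images output shown later in this guide:

$ # Pull the PipeTuner 1.0 image from NGC (requires a prior docker login to nvcr.io, see NGC Setup below)
$ docker pull nvcr.io/nvidia/pipetuner:1.0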

Getting Started

System Requirements

PipeTuner requires the following components on an x86_64 system (a quick verification check is sketched after this list):

  • OS: Ubuntu 22.04
  • NVIDIA driver 535.104 or 535.161
  • Docker (must be configured to run without sudo privileges; see the Docker setup and post-installation instructions)
  • NVIDIA Container Toolkit (see its setup instructions)
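
Before proceeding, a quick check along these lines can confirm that the driver and the container toolkit are working; the CUDA base image tag below is only an illustrative example, not a requirement:

$ # Check the NVIDIA driver version (expect 535.104 or 535.161)
$ nvidia-smi
$ # Check that Docker can reach the GPU through the NVIDIA Container Toolkit
$ # (the CUDA image tag is an illustrative example)
$ docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi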

NGC Setup

Follow the steps below to sign in to an NGC account and generate an API key.

  1. Visit the NGC sign-in page. Enter your email address and click Next, or click Create an Account.
  2. Choose your organization when prompted for Organization/Team. DeepStream users may use any organization and team; Metropolis Microservices users need to select nv-mdx/mdx-v2-0. Click Sign In.
  3. Generate an API key following the instructions.
  4. Log in to the NGC Docker registry (nvcr.io) and enter the following credentials, where YOUR_NGC_API_KEY is the key you generated in the previous step.
$ docker login nvcr.io
Username: "$oauthtoken"
Password: "YOUR_NGC_API_KEY"
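
If you prefer a non-interactive login (for example, in a script), Docker's --password-stdin option can be used; the NGC_API_KEY environment variable name here is just an example:

$ # Non-interactive login; assumes your key is exported as NGC_API_KEY (example variable name)
$ echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin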
  5. Metropolis Microservices users need to install the NGC CLI following the instructions and set the NGC config as below. DeepStream users can skip this step.
$ ngc config set
Enter API key: "YOUR_NGC_API_KEY"
Enter org: nfgnkvuikvjm
Enter team: mdx-v2-0

Sample Data Setup

The sample data consists of a mini synthetic dataset with eight 1-minute streams and config files for tuning. You can download the sample file pipe-tuner-sample.zip by clicking “Download” on the PipeTuner Documentation and Sample Data page.

Once you have downloaded the sample file, unzip it and run setup.sh to finish the sample data setup for either DeepStream or Metropolis Microservices.

$ unzip pipe-tuner-sample.zip
$ cd pipe-tuner-sample/scripts

$ # DeepStream or Metropolis Microservices users should run only one of the following two commands based on their usage 
$ bash setup.sh deepstream            # DeepStream users

$ bash setup.sh metropolis            # Metropolis Microservices users

DeepStream users should see Docker images like the ones below.

$ docker images # bash setup.sh deepstream 
REPOSITORY                                              TAG                    
nvcr.io/nvidia/pipetuner                                1.0
nvcr.io/nvidia/deepstream                               7.0-triton-multiarch

Also, the model files should be present under the ‘models’ folder. They will be mapped into the DeepStream containers during tuning.

$ ls ../models
labels.txt  resnet34_peoplenet_int8.etlt  resnet34_peoplenet_int8.txt  resnet50_market1501_aicity156.onnx

Metropolis Microservices users should see Docker images like the ones below. The ‘models’ folder is empty because the default models in the mdx-perception container will be used.

$ docker images # bash setup.sh metropolis
REPOSITORY                                              TAG                    
nvcr.io/nvidia/pipetuner                                1.0
nvcr.io/nfgnkvuikvjm/mdx-v2-0/mdx-perception            2.1

The final directory structure under pipe-tuner-sample looks like this:

pipe-tuner-sample
├── configs
│   ├── config_CameraMatrix
│   ├── config_GuiTool
│   ├── config_MTMC
│   ├── config_PGIE
│   ├── config_PipeTuner
│   └── config_Tracker
├── data
│   ├── SDG_1min_utils  
│   └── SDG_1min_videos
├── models
├── ngc_download
├── multi-camera-tracking (only for Metropolis Microservices)
└── scripts

Run the Container

The tuning process consists of two steps:

  • Launch the tuning pipelines: PipeTuner runs the pipeline and measures the accuracy metric. Meanwhile, it uses one or more optimizers to optimize the parameters within the search range against the accuracy metric.
  • Retrieve and visualize the tuning results: Find the highest accuracy and the corresponding optimal parameters for deployment.

Launch the Tuning Pipelines

To run the sample pipelines, enter pipe-tuner-sample/scripts:

$ cd pipe-tuner-sample/scripts

launch.sh takes a PipeTuner config file, automatically launches the containers, and starts the tuning process. Usage:

$ bash launch.sh [deepstream image name/id] [config_pipetuner.yml]
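
For example, a DeepStream user working with the sample data would invoke it roughly as follows; the exact config file name under configs/config_PipeTuner is an assumption and should be replaced with the one shipped in the sample package:

$ # Example invocation (hypothetical config file name); the image is the one pulled by setup.sh deepstream
$ bash launch.sh nvcr.io/nvidia/deepstream:7.0-triton-multiarch ../configs/config_PipeTuner/config_PipeTuner_sample.yml

The run produces an output directory under pipe-tuner-sample/output, which is used in the next step.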

Retrieve and Visualize Tuning Results

PipeTuner provides the following features for retrieving the tuned parameters and results:

  • Plot accuracy convergence graph
  • Retrieve the optimal checkpoint

All commands below are executed under pipe-tuner-sample/scripts. After the tuning has run for some iterations, run:
$ bash result_analysis.sh [output folder] [metric]

Here, [output folder] is the output directory created in the previous step (pipe-tuner-sample/output/<PipeTuner config name>.yml_output), and [metric] should match the evaluation metric defined in the PipeTuner config: one of MOTA, IDF1, or HOTA.
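
As a concrete illustration, assuming you are still in pipe-tuner-sample/scripts, the tuning run above produced an output folder named after the sample config (the folder name below is hypothetical), and the config uses IDF1 as its evaluation metric:

$ # Example (hypothetical output folder name); the metric must match the one set in the PipeTuner config
$ bash result_analysis.sh ../output/config_PipeTuner_sample.yml_output IDF1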

License

Asset | Applicable EULA | Notes
PipeTuner Container | NVIDIA_PipeTuner_EULA | A copy of the license is available at the following path inside the container: /pipe-tuner/NVIDIA_PipeTuner_EULA.pdf

NOTE: By pulling, downloading, or using PipeTuner, you accept the terms and conditions of the EULA licenses listed above.

For DeepStream SDK and Metropolis Microservices, please refer to their own licenses.

3rd Party Notice

The PipeTuner container uses third-party libraries that are distributed under licenses other than the container's own license, as listed below.


================================================================================

PipeTuner uses TrackEval which is provided under the following terms:

Copyright Jonathon Luiten - MIT License

License text

Copyright (c) 2020 Jonathon Luiten

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

================================================================================

PipeTuner uses py-motmetrics which is provided under the following terms:

Copyright Christoph Heindl, Toka, Jack Valmadre - MIT License

License text

MIT License

Copyright (c) 2017-2020 Christoph Heindl

Copyright (c) 2018 Toka

Copyright (c) 2019-2020 Jack Valmadre

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.