efficientdet

Object Detection using TAO EfficientDet

Transfer learning is the process of transferring learned features from one application to another. It is a commonly used training technique in which you take a model trained on one task and re-train it on a different, related task.

Train Adapt Optimize (TAO) Toolkit is a simple and easy-to-use Python-based AI toolkit for taking purpose-built AI models and customizing them with users' own data.

Learning Objectives

In this notebook, you will learn how to leverage the simplicity and convenience of TAO to:

  • Take a pretrained model and train an EfficientDet-D0 model on the COCO dataset
  • Evaluate the trained model
  • Run pruning and finetuning with the trained model
  • Run inference with the trained model and visualize the result
  • Export the trained model to a .etlt file for deployment to DeepStream
  • Run inference on the exported .etlt model to verify deployment using TensorRT

Table of Contents

This notebook shows an example use case for object detection using the Train Adapt Optimize (TAO) Toolkit.

  0. Set up env variables and map drives
  1. Installing the TAO Launcher
  2. Prepare dataset and pre-trained model
  3. Provide training specification
  4. Run TAO training
  5. Evaluate trained models
  6. Prune trained model
  7. Retrain pruned models
  8. Evaluate retrained model
  9. Visualize inferences
  10. Deploy
  11. Verify the deployed model

0. Set up env variables and map drives

When using the purpose-built pretrained models from NGC, please make sure to set the $KEY environment variable to the key mentioned in the model overview. Failing to do so can lead to errors when trying to load them as pretrained models.

The following notebook requires the user to set an env variable called $LOCAL_PROJECT_DIR as the path to the user's workspace. Please note that the dataset to run this notebook is expected to reside in $LOCAL_PROJECT_DIR/data, while the collaterals generated by the TAO experiments will be written to $LOCAL_PROJECT_DIR/efficientdet. More information on how to set up the dataset and the supported steps in the TAO workflow is provided in the subsequent cells.

Note: Please make sure to remove any stray artifacts or files that may have been generated by previous experiments from the $USER_EXPERIMENT_DIR and $DATA_DOWNLOAD_DIR paths mentioned below. Leftover checkpoint files and the like may interfere with creating a training graph for a new experiment.

Note: This notebook is set up to run training on 1 GPU by default. To use more GPUs, please update the env variable $NUM_GPUS accordingly.

In [1]:
# Setting up env variables for cleaner command line commands.
import os

%env KEY=nvidia_tlt
%env NUM_GPUS=1
%env USER_EXPERIMENT_DIR=/workspace/tao-experiments/efficientdet
%env DATA_DOWNLOAD_DIR=/workspace/tao-experiments/data

# Set this path if you don't run the notebook from the samples directory.
# %env NOTEBOOK_ROOT=~/tao-samples/efficientdet

# Please define this local project directory that needs to be mapped to the TAO docker session.
# The dataset expected to be present in $LOCAL_PROJECT_DIR/data, while the results for the steps
# in this notebook will be stored at $LOCAL_PROJECT_DIR/efficientdet
# !PLEASE MAKE SURE TO UPDATE THIS PATH!.
%env LOCAL_PROJECT_DIR=/workspace/tao-experiments/

os.environ["LOCAL_DATA_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "data"
)
os.environ["LOCAL_EXPERIMENT_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "efficientdet"
)

# The sample spec files are present in the same path as the downloaded samples.
os.environ["LOCAL_SPECS_DIR"] = os.path.join(
    os.getenv("NOTEBOOK_ROOT", os.getcwd()),
    "specs"
)
%env SPECS_DIR=/workspace/tao-experiments/efficientdet/specs

# Showing list of specification files.
!ls -rlt $LOCAL_SPECS_DIR

The cell below maps the project directory on your local host to a workspace directory in the TAO docker instance, so that data and results are mapped into and out of the docker. For more information, please refer to the launcher section of the TAO Toolkit user guide.

When running this cell on AWS, replace the drive_map entry with the dictionary defined below, so that you don't run into permission issues when writing data into folders created by the TAO docker.

drive_map = {
    "Mounts": [
            # Mapping the data directory
            {
                "source": os.environ["LOCAL_PROJECT_DIR"],
                "destination": "/workspace/tao-experiments"
            },
            # Mapping the specs directory.
            {
                "source": os.environ["LOCAL_SPECS_DIR"],
                "destination": os.environ["SPECS_DIR"]
            },
        ],
    "DockerOptions": {
        "user": "{}:{}".format(os.getuid(), os.getgid()),
        "network": "host"
    }
}
In [2]:
# Mapping the local directories to the TAO docker.
import json
mounts_file = os.path.expanduser("~/.tao_mounts.json")

# Define the dictionary with the mapped drives
drive_map = {
    "Mounts": [
        # Mapping the data directory
        {
            "source": os.environ["LOCAL_PROJECT_DIR"],
            "destination": "/workspace/tao-experiments"
        },
        # Mapping the specs directory.
        {
            "source": os.environ["LOCAL_SPECS_DIR"],
            "destination": os.environ["SPECS_DIR"]
        },
    ]
}

# Writing the mounts file.
with open(mounts_file, "w") as mfile:
    json.dump(drive_map, mfile, indent=4)
In [3]:
!cat ~/.tao_mounts.json

1. Installing the TAO launcher

The TAO launcher is a Python package distributed as a Python wheel on PyPI. You may install the launcher by executing the following cell.

Please note that TAO Toolkit recommends running the TAO launcher in a virtual env with Python 3.6.9. You may follow the instructions on this page to set up a Python virtual env using the virtualenv and virtualenvwrapper packages. Once you have set up virtualenvwrapper, please set the version of Python to be used in the virtual env via the VIRTUALENVWRAPPER_PYTHON variable. You may do so by running

export VIRTUALENVWRAPPER_PYTHON=/path/to/bin/python3.x

where x >= 6 and <= 8
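
For example, a minimal sketch of creating and activating such an env from a terminal (the virtualenvwrapper.sh location, the Python path, and the env name launcher below are assumptions; adjust them to your system):

export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3.6
source /usr/local/bin/virtualenvwrapper.sh
mkvirtualenv -p /usr/bin/python3.6 launcher
workon launcher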

We recommend performing this step first and then launching the notebook from the virtual environment. In addition to installing the TAO Python package, please make sure the following software requirements are met:

  • python >= 3.6.9, < 3.8.x
  • docker-ce > 19.03.5
  • docker-API 1.40
  • nvidia-container-toolkit > 1.3.0-1
  • nvidia-container-runtime > 3.4.0-1
  • nvidia-docker2 > 2.5.0-1
  • nvidia-driver > 455+

Once you have installed the prerequisites, please log in to the docker registry nvcr.io by running the command below

docker login nvcr.io

You will be prompted to enter a username and password. The username is $oauthtoken and the password is the API key generated from ngc.nvidia.com. Please follow the instructions in the NGC setup guide to generate your own API key.

After setting up your virtual environment with the above requirements, install the TAO pip package.

In [4]:
# SKIP this step IF you have already installed the TAO launcher.
!pip3 install nvidia-tao
In [5]:
# View the versions of the TAO launcher
!tao info

2. Prepare dataset and pre-trained model

We will be using the COCO dataset for this tutorial. The following script downloads the COCO dataset automatically and converts it to TFRecords.

In [6]:
# Create local dir
!mkdir -p $LOCAL_DATA_DIR
!mkdir -p $LOCAL_EXPERIMENT_DIR
# Download and preprocess data
!tao efficientdet run bash $SPECS_DIR/download_coco.sh $DATA_DOWNLOAD_DIR
In [7]:
# convert training data to TFRecords
!tao efficientdet dataset_convert -i $DATA_DOWNLOAD_DIR/raw-data/train2017 \
                                  -a $DATA_DOWNLOAD_DIR/raw-data/annotations/instances_train2017.json \
                                  -o $DATA_DOWNLOAD_DIR --include_masks -t train -s 256
In [8]:
# convert validation data to TFRecords
!tao efficientdet dataset_convert -i $DATA_DOWNLOAD_DIR/raw-data/val2017 \
                                  -a $DATA_DOWNLOAD_DIR/raw-data/annotations/instances_val2017.json \
                                  -o $DATA_DOWNLOAD_DIR --include_masks -t val -s 32

Note that the dataset conversion scripts provided in specs are intended for the standard COCO dataset. If your data doesn't have caption groundtruth or a test set, you can modify download_and_preprocess_coco.sh and create_coco_tf_record.py by commenting out the corresponding variables.

In [9]:
# verify
!ls -l $LOCAL_DATA_DIR

Download pretrained model from NGC

We will use the NGC CLI to get the pre-trained models. For more details, go to ngc.nvidia.com and click SETUP in the navigation bar.

In [10]:
# Installing NGC CLI on the local machine.
## Download and install
%env CLI=ngccli_cat_linux.zip
!mkdir -p $LOCAL_PROJECT_DIR/ngccli

# Remove any previously existing CLI installations
!rm -rf $LOCAL_PROJECT_DIR/ngccli/*
!wget "https://ngc.nvidia.com/downloads/$CLI" -P $LOCAL_PROJECT_DIR/ngccli
!unzip -u "$LOCAL_PROJECT_DIR/ngccli/$CLI" -d $LOCAL_PROJECT_DIR/ngccli/
!rm -f $LOCAL_PROJECT_DIR/ngccli/*.zip 
os.environ["PATH"]="{}/ngccli:{}".format(os.getenv("LOCAL_PROJECT_DIR", ""), os.getenv("PATH", ""))
In [11]:
!ngc registry model list nvstaging/tao/pretrained_efficientdet:efficientnet_b0*
In [12]:
# Pull pretrained model from NGC
!ngc registry model download-version nvstaging/tao/pretrained_efficientdet:efficientnet_b0 --dest $LOCAL_EXPERIMENT_DIR
In [13]:
print("Check that model is downloaded into dir.")
!ls -l $LOCAL_EXPERIMENT_DIR/pretrained_efficientdet_vefficientnet_b0

3. Provide training specification

  • TFRecords for the train datasets
    • In order to use the newly generated TFRecords, update the dataset_config parameter in the spec file at $SPECS_DIR/efficientdet_d0_train.txt
  • Note that the learning rate in the spec file is set for single-GPU training. If you train with N GPUs, you should multiply the learning rate by N (see the short sketch after this list).
  • "num_examples_per_epoch" should be set to the total number of images in the dataset divided by the number of GPUs. For example, if you train COCO with 8 GPUs, you can set num_examples_per_epoch=14700
  • Pre-trained models
  • Augmentation parameters for on-the-fly data augmentation
  • Other training (hyper-)parameters such as batch size, number of epochs, learning rate, etc.
  • Note that the sample spec is not meant to produce SOTA accuracy on COCO. To reproduce SOTA, you might want to use TAO to train an ImageNet model first and change the total_steps to 100K or above.
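
As a concrete illustration of the learning-rate and num_examples_per_epoch bullets above, the following minimal sketch shows the arithmetic (the single-GPU learning rate below is a placeholder assumption; read the real value from your spec file):

num_gpus = 8                  # number of GPUs used for training
base_learning_rate = 0.008    # placeholder single-GPU learning rate; take the real value from the spec
coco_train_images = 118287    # number of images in COCO train2017

learning_rate = base_learning_rate * num_gpus            # scale linearly with the number of GPUs
num_examples_per_epoch = coco_train_images // num_gpus   # 14785, i.e. roughly the 14700 quoted above

print(learning_rate, num_examples_per_epoch)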
In [14]:
!cat $LOCAL_SPECS_DIR/efficientdet_d0_train.txt

4. Train an EfficientDet model

  • Provide the sample spec file and the output directory location for models
  • Evaluation uses COCO metrics. For more info, please refer to: https://cocodataset.org/#detection-eval
  • WARNING: Training may take several hours or up to a day to complete
In [15]:
!mkdir -p $LOCAL_EXPERIMENT_DIR/experiment_dir_unpruned
In [16]:
print("For multi-GPU, change --gpus based on your machine.")
!tao efficientdet train -e $SPECS_DIR/efficientdet_d0_train.txt \
                        -d $USER_EXPERIMENT_DIR/experiment_dir_unpruned\
                        -k $KEY \
                        --gpus $NUM_GPUS
In [17]:
print("To resume training from a checkpoint, simply run the same training script. It will pick up from where it's left.")
# !tao efficientdet train -e $SPECS_DIR/efficientdet_d0_train.txt \
#                        -d $USER_EXPERIMENT_DIR/experiment_dir_unpruned\
#                        -k $KEY \
#                        --gpus $NUM_GPUS
In [18]:
print('Model for each epoch:')
print('---------------------')
!ls -ltrh $LOCAL_EXPERIMENT_DIR/experiment_dir_unpruned/

5. Evaluate trained models

In [19]:
# get the last step of saved checkpoints
last_step=0
for f in os.listdir(os.path.join(os.environ["LOCAL_EXPERIMENT_DIR"],'experiment_dir_unpruned')):
    if f.startswith('model.step'):
        step = int(f.split('.')[1].split('-')[1])
        if step > last_step:
            last_step = step
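
Optionally, you can print the detected step to sanity-check which checkpoint will be evaluated:

print("Latest checkpoint step:", last_step)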
In [20]:
# You can set NUM_STEP to the step corresponding to any saved checkpoint
%env NUM_STEP={last_step}
In [21]:
!tao efficientdet evaluate -e $SPECS_DIR/efficientdet_d0_train.txt \
                           -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/model.step-$NUM_STEP.tlt \
                           -k $KEY

6. Prune

  • Specify pre-trained model
  • Equalization criterion
  • Threshold for pruning.
  • A key to save and load the model
  • Output directory to store the model

Usually, you just need to adjust -pth (threshold) to trade off accuracy against model size. A higher pth gives you a smaller model (and thus higher inference speed) but worse accuracy. The threshold value depends on the dataset and the model. The 0.7 used in the block below is just a starting point: if the retrained accuracy is good, you can increase this value to get smaller models; otherwise, lower it to get better accuracy.
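
If you want to explore this trade-off empirically, the optional sketch below prunes at a few thresholds and compares the resulting model sizes. The threshold list and the suffixed output directory names are illustrative assumptions, not part of the standard workflow; the single -pth 0.7 call in the cells below is all that is required.

import os
num_step = os.environ["NUM_STEP"]
key = os.environ["KEY"]
docker_exp_dir = os.environ["USER_EXPERIMENT_DIR"]   # path as seen inside the TAO docker
local_exp_dir = os.environ["LOCAL_EXPERIMENT_DIR"]   # same location on the local host

for pth in [0.5, 0.7, 0.9]:
    suffix = str(pth).replace(".", "_")
    # Create a per-threshold output directory and prune into it.
    !mkdir -p {local_exp_dir}/experiment_dir_pruned_{suffix}
    !tao efficientdet prune -m {docker_exp_dir}/experiment_dir_unpruned/model.step-{num_step}.tlt -o {docker_exp_dir}/experiment_dir_pruned_{suffix} -pth {pth} -k {key}
    # Compare the sizes of the pruned models.
    !ls -lh {local_exp_dir}/experiment_dir_pruned_{suffix}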

In [22]:
# Create an output directory to save the pruned model.
!mkdir -p $LOCAL_EXPERIMENT_DIR/experiment_dir_pruned
In [23]:
!tao efficientdet prune -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/model.step-$NUM_STEP.tlt \
                        -o $USER_EXPERIMENT_DIR/experiment_dir_pruned \
                        -pth 0.7 \
                        -k $KEY
In [24]:
!ls -l $LOCAL_EXPERIMENT_DIR/experiment_dir_pruned

Note that you should retrain the pruned model first, as it cannot be directly used for evaluation or inference.

7. Retrain pruned models

  • Model needs to be re-trained to bring back accuracy after pruning
  • Specify re-training specification
  • WARNING: Training may take several hours or up to a day to complete
In [25]:
!cat $LOCAL_SPECS_DIR/efficientdet_d0_retrain.txt
In [26]:
!mkdir -p $LOCAL_EXPERIMENT_DIR/experiment_dir_retrain
In [27]:
!tao efficientdet train -e $SPECS_DIR/efficientdet_d0_retrain.txt \
                        -d $USER_EXPERIMENT_DIR/experiment_dir_retrain\
                        -k $KEY \
                        --gpus $NUM_GPUS

8. Evaluate retrained model

In [28]:
!tao efficientdet evaluate -e $SPECS_DIR/efficientdet_d0_retrain.txt \
                           -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/model.step-$NUM_STEP.tlt \
                           -k $KEY

9. Visualize inferences

In this section, we run the inference tool to generate inferences with the trained model and visualize the results. The tool produces annotated images as output.

In [29]:
# Copy some test images
!mkdir -p $LOCAL_DATA_DIR/test_samples
!cp $LOCAL_DATA_DIR/raw-data/test2017/0000000000* $LOCAL_DATA_DIR/test_samples
In [30]:
# Running inference for detection on n images
!tao efficientdet inference -i $DATA_DOWNLOAD_DIR/test_samples \
                            -o $USER_EXPERIMENT_DIR/annotated_images \
                            -e $SPECS_DIR/efficientdet_d0_retrain.txt \
                            -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/model.step-$NUM_STEP.tlt \
                            --label_map $SPECS_DIR/coco_labels.txt \
                            -k $KEY
In [31]:
# Simple grid visualizer
!pip3 install matplotlib==3.3.3
import matplotlib.pyplot as plt
import os
from math import ceil
valid_image_ext = ['.jpg']

def visualize_images(image_dir, num_cols=4, num_images=10):
    output_path = os.path.join(os.environ['LOCAL_EXPERIMENT_DIR'], image_dir)
    num_rows = int(ceil(float(num_images) / float(num_cols)))
    f, axarr = plt.subplots(num_rows, num_cols, figsize=[80,30])
    f.tight_layout()
    a = [os.path.join(output_path, image) for image in os.listdir(output_path) 
         if os.path.splitext(image)[1].lower() in valid_image_ext]
    for idx, img_path in enumerate(a[:num_images]):
        col_id = idx % num_cols
        row_id = idx // num_cols
        img = plt.imread(img_path)
        axarr[row_id, col_id].imshow(img) 
In [32]:
# Visualizing the sample images.
OUTPUT_PATH = 'annotated_images' # relative path from $USER_EXPERIMENT_DIR.
COLS = 2 # number of columns in the visualizer grid.
IMAGES = 4 # number of images to visualize.

visualize_images(OUTPUT_PATH, num_cols=COLS, num_images=IMAGES)

10. Deploy!

In [33]:
# Export in FP32 mode. 
!mkdir -p $LOCAL_EXPERIMENT_DIR/export
!tao efficientdet export -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/model.step-$NUM_STEP.tlt \
                         -o $USER_EXPERIMENT_DIR/experiment_dir_retrain/model.step-$NUM_STEP.etlt \
                         -k $KEY \
                         -e $SPECS_DIR/efficientdet_d0_retrain.txt \
                         --data_type fp32 \
                         --engine_file $USER_EXPERIMENT_DIR/export/model.step-$NUM_STEP.engine
In [34]:
# Export in INT8 mode. 
!mkdir -p $LOCAL_EXPERIMENT_DIR/export_int8
# Remove existing etlt file
!rm -f $LOCAL_EXPERIMENT_DIR/experiment_dir_retrain/model.step-$NUM_STEP.etlt
!tao efficientdet export -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/model.step-$NUM_STEP.tlt \
                         -o $USER_EXPERIMENT_DIR/experiment_dir_retrain/model.step-$NUM_STEP.etlt \
                         -k $KEY \
                         -e $SPECS_DIR/efficientdet_d0_retrain.txt \
                         --batch_size 8 \
                         --data_type int8 \
                         --cal_image_dir $DATA_DOWNLOAD_DIR/raw-data/val2017 \
                         --batches 10 \
                         --max_batch_size 1 \
                         --cal_cache_file $USER_EXPERIMENT_DIR/export/efficientdet_d0.cal
In [35]:
# Check if etlt model is correctly saved.
!ls -l $LOCAL_EXPERIMENT_DIR/experiment_dir_retrain/

Verify engine generation using the tao-converter utility included with the docker.

The tao-converter produces optimized TensorRT engines for the platform that it resides on. Therefore, to get maximum performance, please instantiate this docker and execute the tao-converter command on your target device, with the exported .etlt file and the calibration cache (for INT8 mode). The tao-converter utility included in this docker only works for x86 devices with discrete NVIDIA GPUs.

For Jetson devices, please download the tao-converter for Jetson from the dev zone link here.

If you choose to integrate your model into DeepStream directly, you may do so by copying the exported .etlt file along with the calibration cache to the target device and updating the spec file that configures the gst-nvinfer element to point to the newly exported model. Please refer to the DeepStream development guide for more details.

In [36]:
print('Exported model:')
print('------------')
!ls -lth $LOCAL_EXPERIMENT_DIR/export
In [37]:
# Convert to TensorRT engine (FP16).
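# -p supplies the optimization profile for the dynamic-shape input as <input_name>,<min_shape>,<opt_shape>,<max_shape>;
# -t selects the engine precision and -e sets where the generated TensorRT engine is written.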
!tao converter -k $KEY  \
                -p image_arrays:0,1x512x512x3,8x512x512x3,16x512x512x3 \
                -e $USER_EXPERIMENT_DIR/export/trt.fp16.engine \
                -t fp16 \
                $USER_EXPERIMENT_DIR/experiment_dir_retrain/model.step-$NUM_STEP.etlt
In [38]:
# Convert to TensorRT engine (INT8).
!tao converter -k $KEY  \
               -c $USER_EXPERIMENT_DIR/export/efficientdet_d0.cal \
               -p image_arrays:0,1x512x512x3,8x512x512x3,16x512x512x3 \
               -e $USER_EXPERIMENT_DIR/export/trt.int8.engine \
               -t int8 \
               -b 8 \
               $USER_EXPERIMENT_DIR/experiment_dir_retrain/model.step-$NUM_STEP.etlt
In [39]:
print('Exported engine:')
print('------------')
!ls -lh $LOCAL_EXPERIMENT_DIR/export/

11. Verify the deployed model

Verify the converted engine by visualizing TensorRT inferences.

In [40]:
# Running inference for detection on a dir of images
!tao efficientdet inference -i $DATA_DOWNLOAD_DIR/test_samples \
                         -o $USER_EXPERIMENT_DIR/trt_annotated_images \
                         -e $SPECS_DIR/efficientdet_d0_retrain.txt \
                         -m $USER_EXPERIMENT_DIR/export/model.step-$NUM_STEP.engine \
                         -l $USER_EXPERIMENT_DIR/trt_annotated_labels \
                         --label_map $SPECS_DIR/coco_labels.txt
In [41]:
!ls -l $LOCAL_EXPERIMENT_DIR/trt_annotated_images