Data Augmentation using Augment

Transfer learning is the process of transferring learned features from one application to another. It is a commonly used training technique where you take a model trained on one task and re-train it for use on a different task.

Train Adapt Optimize (TAO) Toolkit is a simple and easy-to-use Python-based AI toolkit for taking purpose-built AI models and customizing them with users' own data.

Learning Objectives

In this notebook, you will learn how to leverage the simplicity and convenience of the TAO Toolkit to augment an object detection dataset using the augment task.

Table of Contents

This notebook shows an example use case of data augmentation for an object detection dataset.

  0. Set up env variables
  1. Install the TAO launcher
  2. Prepare the dataset
    A. Download the dataset
    B. Verify downloaded dataset
  3. Augment the dataset
  4. Visualize augmented results

0. Set up env variables

The cells below set up all the environment variables for the input dataset and the augmented output data.

*Note: Please make sure to remove any stray artifacts/files generated by previous runs from the $USER_EXPERIMENT_DIR and $DATA_DOWNLOAD_DIR paths mentioned below; an optional clean-up snippet is provided after the cell below.*

In [1]:
%env USER_EXPERIMENT_DIR=/workspace/tao-experiments/augment
%env DATA_DOWNLOAD_DIR=/workspace/tao-experiments/data
%env SPECS_DIR=/workspace/examples/augment/specs

# Setting up env variables for cleaner command line commands.
import os

# Set this path if you don't run the notebook from the samples directory.
# %env NOTEBOOK_ROOT=~/tao-samples/augment

# Please define this local project directory that needs to be mapped to the TAO docker session.
# The dataset is expected to be present in $LOCAL_PROJECT_DIR/data, while the results for the steps
# in this notebook will be stored at $LOCAL_PROJECT_DIR/augment
# !PLEASE MAKE SURE TO UPDATE THIS PATH!.
os.environ["LOCAL_PROJECT_DIR"] = FIXME

os.environ["LOCAL_DATA_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "data"
)
os.environ["LOCAL_EXPERIMENT_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "augment"
)

# The sample spec files are present in the same path as the downloaded samples.
os.environ["LOCAL_SPECS_DIR"] = os.path.join(
    os.getenv("NOTEBOOK_ROOT", os.getcwd()),
    "specs"
)

# Showing list of specification files.
!ls -rlt $LOCAL_SPECS_DIR
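
As noted above, stray artifacts from previous runs should be removed before proceeding. If you would like to do this from within the notebook, here is a minimal optional sketch (it assumes that $LOCAL_EXPERIMENT_DIR/augmented_dataset contains only outputs generated by earlier runs of this notebook and is safe to delete):

# Optional: clear outputs from previous runs (assumes this folder only holds generated results).
!rm -rf $LOCAL_EXPERIMENT_DIR/augmented_dataset
# Make sure the top-level experiment and data directories exist.
!mkdir -p $LOCAL_EXPERIMENT_DIR $LOCAL_DATA_DIR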

The cell below maps the project directory on your local host to a workspace directory in the TAO docker instance, so that the data and the results are mapped in and out of the docker. For more information, please refer to the launcher instance in the user guide.

In [2]:
# Mapping up the local directories to the TAO docker.
import json
mounts_file = os.path.expanduser("~/.tao_mounts.json")

# Define the dictionary with the mapped drives
drive_map = {
    "Mounts": [
        # Mapping the data directory
        {
            "source": os.environ["LOCAL_PROJECT_DIR"],
            "destination": "/workspace/tao-experiments"
        },
        # Mapping the specs directory.
        {
            "source": os.environ["LOCAL_SPECS_DIR"],
            "destination": os.environ["SPECS_DIR"]
        },
    ]
}

# Writing the mounts file.
with open(mounts_file, "w") as mfile:
    json.dump(drive_map, mfile, indent=4)
In [3]:
!cat ~/.tao_mounts.json

1. Install the TAO launcher

The TAO launcher is a python package distributed as a python wheel listed in the nvidia-pyindex python index. You may install the launcher by executing the following cell.

Please note that TAO Toolkit recommends running the TAO launcher in a virtual env with python 3.6.9. You may follow the instructions on this page to set up a python virtual env using the virtualenv and virtualenvwrapper packages. Once you have set up virtualenvwrapper, please set the version of python to be used in the virtual env by using the VIRTUALENVWRAPPER_PYTHON variable. You may do so by running

export VIRTUALENVWRAPPER_PYTHON=/path/to/bin/python3.x

where x >= 6 and <= 8
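
For example, a hypothetical setup (the interpreter path and environment name below are placeholders, not requirements) might look like:

export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3.6
mkvirtualenv -p /usr/bin/python3.6 launcher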

We recommend performing this step first and then launching the notebook from the virtual environment. In addition to installing the TAO python package, please make sure of the following software requirements:

  • python >= 3.6.9, < 3.8.x
  • docker-ce > 19.03.5
  • docker-API 1.40
  • nvidia-container-toolkit > 1.3.0-1
  • nvidia-container-runtime > 3.4.0-1
  • nvidia-docker2 > 2.5.0-1
  • nvidia-driver > 455

Once you have installed the pre-requisites, please log in to the docker registry nvcr.io using the command below

docker login nvcr.io

You will be prompted to enter a username and password. The username is $oauthtoken and the password is the API key generated from ngc.nvidia.com. Please follow the instructions in the NGC setup guide to generate your own API key.
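
If you prefer to skip the interactive prompt, one hedged alternative (it assumes your NGC API key is available in an NGC_API_KEY environment variable, which this notebook does not set for you) is:

echo $NGC_API_KEY | docker login nvcr.io --username '$oauthtoken' --password-stdin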

In [4]:
# SKIP this step IF you have already installed the TAO launcher wheel.
!pip3 install nvidia-pyindex
!pip3 install nvidia-tao
In [5]:
# View the versions of the TAO launcher
!tao info

2. Prepare the dataset

We will be using the KITTI object detection dataset for this example. To find more details, please visit http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d. Please download both the left color images of the object dataset and the training labels for the object dataset, and place the zip files in $LOCAL_DATA_DIR.

The data will then be extracted to have

  • training images in $LOCAL_DATA_DIR/training/image_2
  • training labels in $LOCAL_DATA_DIR/training/label_2
  • testing images in $LOCAL_DATA_DIR/testing/image_2

You may use this notebook with your own dataset as well. To use this example with your own dataset, please follow the same directory structure as mentioned above.

Note: There are no labels for the testing images; therefore, we use them just to visualize inferences for a trained model.

A. Download the dataset

Once you have received the download links in your email, please populate them in place of the KITTI_IMAGES_DOWNLOAD_URL and the KITTI_LABELS_DOWNLOAD_URL variables in the cell below. The next cell will download the data and place it in $LOCAL_DATA_DIR.

In [6]:
import os
!mkdir -p $LOCAL_DATA_DIR
# Replace KITTI_IMAGES_DOWNLOAD_URL and KITTI_LABELS_DOWNLOAD_URL below with the links from your email.
os.environ["URL_IMAGES"] = KITTI_IMAGES_DOWNLOAD_URL
!if [ ! -f $LOCAL_DATA_DIR/data_object_image_2.zip ]; then wget $URL_IMAGES -O $LOCAL_DATA_DIR/data_object_image_2.zip; else echo "image archive already downloaded"; fi
os.environ["URL_LABELS"] = KITTI_LABELS_DOWNLOAD_URL
!if [ ! -f $LOCAL_DATA_DIR/data_object_label_2.zip ]; then wget $URL_LABELS -O $LOCAL_DATA_DIR/data_object_label_2.zip; else echo "label archive already downloaded"; fi

B. Verify downloaded dataset

In [7]:
# Check the dataset is present
!if [ ! -f $LOCAL_DATA_DIR/data_object_image_2.zip ]; then echo 'Image zip file not found, please download.'; else echo 'Found Image zip file.';fi
!if [ ! -f $LOCAL_DATA_DIR/data_object_label_2.zip ]; then echo 'Label zip file not found, please download.'; else echo 'Found Labels zip file.';fi
In [8]:
# This may take a while: verify integrity of zip files 
!sha256sum $LOCAL_DATA_DIR/data_object_image_2.zip | cut -d ' ' -f 1 | grep -xq '^351c5a2aa0cd9238b50174a3a62b846bc5855da256b82a196431d60ff8d43617$' ; \
if test $? -eq 0; then echo "images OK"; else echo "images corrupt, redownload!" && rm -f $LOCAL_DATA_DIR/data_object_image_2.zip; fi 
!sha256sum $LOCAL_DATA_DIR/data_object_label_2.zip | cut -d ' ' -f 1 | grep -xq '^4efc76220d867e1c31bb980bbf8cbc02599f02a9cb4350effa98dbb04aaed880$' ; \
if test $? -eq 0; then echo "labels OK"; else echo "labels corrupt, redownload!" && rm -f $LOCAL_DATA_DIR/data_object_label_2.zip; fi 
In [9]:
# unpack downloaded datasets to $DATA_DOWNLOAD_DIR.
# The training images will be under $DATA_DOWNLOAD_DIR/training/image_2 and 
# labels will be under $DATA_DOWNLOAD_DIR/training/label_2.
# The testing images will be under $DATA_DOWNLOAD_DIR/testing/image_2.
!unzip -u $LOCAL_DATA_DIR/data_object_image_2.zip -d $LOCAL_DATA_DIR
!unzip -u $LOCAL_DATA_DIR/data_object_label_2.zip -d $LOCAL_DATA_DIR
In [10]:
# verify
import os

DATA_DIR = os.environ.get('LOCAL_DATA_DIR')
num_training_images = len(os.listdir(os.path.join(DATA_DIR, "training/image_2")))
num_training_labels = len(os.listdir(os.path.join(DATA_DIR, "training/label_2")))
num_testing_images = len(os.listdir(os.path.join(DATA_DIR, "testing/image_2")))
print("Number of images in the trainval set. {}".format(num_training_images))
print("Number of labels in the trainval set. {}".format(num_training_labels))
print("Number of images in the test set. {}".format(num_testing_images))
In [11]:
# Sample kitti label.
!cat $LOCAL_DATA_DIR/training/label_2/000110.txt
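
For reference, each line of a KITTI label file lists the object class, truncation, occlusion and alpha values, the 2D bounding box (xmin, ymin, xmax, ymax), and the 3D dimensions, location and rotation. Below is a minimal sketch that prints just the class names and 2D boxes from the sample label shown above (it relies on the LOCAL_DATA_DIR variable set earlier and assumes 000110.txt exists):

import os

# Parse the sample KITTI label and print the class name and 2D bounding box of each object.
label_path = os.path.join(os.environ["LOCAL_DATA_DIR"], "training/label_2/000110.txt")
with open(label_path) as label_file:
    for line in label_file:
        fields = line.split()
        # KITTI columns: class, truncation, occlusion, alpha, xmin, ymin, xmax, ymax, ...
        print(fields[0], [float(value) for value in fields[4:8]])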

3. Augment the dataset

The cell below ingests the downloaded KITTI dataset and augments it. For this use case, we apply

  1. a spatial rotation of 10 degrees.
  2. a color space hue rotation of 5 degrees.

Note: The offline augmentation graph has a very small GPU footprint. Therefore, to maximize GPU utilization, you may use a larger batch size via the --batch_size option on the command line (see the example after the command below). By default, this parameter is set to 4. The number of images that can fit in a batch is governed by the memory available on your GPU.

In [12]:
!cat $LOCAL_SPECS_DIR/default_spec.txt
In [13]:
!tao augment -a $SPECS_DIR/default_spec.txt \
             -o $USER_EXPERIMENT_DIR/augmented_dataset \
             -d $DATA_DOWNLOAD_DIR/training
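
As mentioned in the note above, you may pass a larger batch size to improve GPU utilization. Here is a sketch of the same command with the --batch_size option (the value 8 is only an illustration; choose one that fits in your GPU memory):

!tao augment -a $SPECS_DIR/default_spec.txt \
             -o $USER_EXPERIMENT_DIR/augmented_dataset \
             -d $DATA_DOWNLOAD_DIR/training \
             --batch_size 8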

4. Visualize augmented results

Now that the dataset has been augmented, it is worthwhile to render the augmented images and labels. The outputs of augment are generated in the following paths:

  • images: $LOCAL_EXPERIMENT_DIR/augmented_dataset/image_2
  • labels: $LOCAL_EXPERIMENT_DIR/augmented_dataset/label_2

If you would like to visualize images with overlaid bounding boxes, please re-run the tao augment command above with the -v flag enabled. This generates bounding-box-rendered outputs at

  • annotated images: $LOCAL_EXPERIMENT_DIR/augmented_dataset/images_annotated
In [14]:
# Simple grid visualizer
!pip3 install matplotlib==3.3.3
%matplotlib inline
import matplotlib.pyplot as plt
import os
from math import ceil
valid_image_ext = ['.jpg', '.png', '.jpeg', '.ppm']

def visualize_images(image_dir, num_cols=4, num_images=10):
    output_path = os.path.join(os.environ['LOCAL_EXPERIMENT_DIR'], image_dir)
    num_rows = int(ceil(float(num_images) / float(num_cols)))
    f, axarr = plt.subplots(num_rows, num_cols, figsize=[80,30])
    f.tight_layout()
    a = [os.path.join(output_path, image) for image in os.listdir(output_path) 
         if os.path.splitext(image)[1].lower() in valid_image_ext]
    for idx, img_path in enumerate(a[:num_images]):
        col_id = idx % num_cols
        row_id = idx // num_cols
        img = plt.imread(img_path)
        axarr[row_id, col_id].imshow(img) 
In [15]:
# Visualizing the first 12 images.
# If you would like to view sample annotated images, then please re-run the augment command with the -v flag
# and update the output path below to augmented_dataset/images_annotated
OUTPUT_PATH = 'augmented_dataset/image_2' # relative path from $LOCAL_EXPERIMENT_DIR.
COLS = 4 # number of columns in the visualizer grid.
IMAGES = 12 # number of images to visualize.

visualize_images(OUTPUT_PATH, num_cols=COLS, num_images=IMAGES)
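
If you re-ran the augment command with the -v flag, you can point the same helper at the annotated output directory listed above (a sketch; it assumes augmented_dataset/images_annotated has been generated):

# Visualize the bounding-box annotated outputs produced by the -v flag.
visualize_images('augmented_dataset/images_annotated', num_cols=COLS, num_images=IMAGES)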