Fiducial Points Estimation using TAO FPENet

Transfer learning is the process of transferring learned features from one application to another. It is a commonly used training technique in which you take a model trained on one task and re-train it for a different task.

Train Adapt Optimize (TAO) Toolkit is a simple and easy-to-use Python based AI toolkit for taking purpose-built AI models and customizing them with users' own data.

Learning Objectives

In this notebook, you will learn how to leverage the simplicity and convenience of TAO to:

  • Take a pretrained model and train an FPENet model on the AFW dataset
  • Run Inference on the trained model
  • Export the retrained model to a .etlt file for deployment to DeepStream SDK

Table of Contents

This notebook shows an example of Fiducial Points Estimation using Train Adapt Optimize (TAO) Toolkit.

  0. Set up env variables, map drives, and install dependencies
  1. Install the TAO launcher
  2. Prepare dataset and pre-trained model
    A. Download and verify dataset
    B. Obtain pre-trained model
  3. Generate tfrecords from labels in json format
  4. Provide training specification
  5. Run TAO training
  6. Evaluate the trained model
  7. Run inference on testing set
  8. Deploy / Export

0. Set up env variables, map drives, and install dependencies

When using the purpose-built pretrained models from NGC, please make sure to set the $KEY environment variable to the key mentioned in the model overview. Failing to do so can lead to errors when trying to load them as pretrained models.

The following notebook requires the user to set an env variable called $LOCAL_PROJECT_DIR as the path to the user's workspace. Please note that the dataset for this notebook is expected to reside in $LOCAL_PROJECT_DIR/fpenet/data, while the collateral generated by the TAO experiments will be output to $LOCAL_PROJECT_DIR/fpenet. More information on how to set up the dataset and on the supported steps in the TAO workflow is provided in the subsequent cells.

Note: This notebook is currently set up to run training using 1 GPU by default. To use more GPUs, please update the env variable $NUM_GPUS accordingly.
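
For reference, assuming the default paths used in this notebook, the local workspace ends up laid out roughly as follows (the docker-side mappings are shown for orientation only):

$LOCAL_PROJECT_DIR                      -> /workspace/tao-experiments in the docker
└── fpenet                              -> $LOCAL_EXPERIMENT_DIR / $USER_EXPERIMENT_DIR
    ├── data                            -> $LOCAL_DATA_DIR / $DATA_DIR (dataset and tfrecords)
    ├── pretrained_models               -> pretrained FpeNet model (section 2)
    └── models                          -> training outputs such as exp1 (sections 5-8)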

In [1]:
# Setting up env variables for cleaner command-line commands.
import os

%env KEY=nvidia_tlt
%env NUM_GPUS=1
%env USER_EXPERIMENT_DIR=/workspace/tao-experiments/fpenet
%env DATA_DIR=/workspace/tao-experiments/fpenet/data

# The number of keypoints can be chosen from [10, 80] for this notebook
%env NUM_KEYPOINTS=80

# Set this path if you don't run the notebook from the samples directory.
# %env NOTEBOOK_ROOT=~/tao-samples/fpenet

# Please define this local project directory that needs to be mapped to the TAO docker session.
# !PLEASE MAKE SURE TO UPDATE THIS PATH!.
%env LOCAL_PROJECT_DIR=/path/to/local/experiments

# $SAMPLES_DIR is the path to the sample notebook folder and the dependency folder
# $SAMPLES_DIR/deps should exist for dependency installation
%env SAMPLES_DIR=/path/to/local/samples_dir

os.environ["LOCAL_DATA_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "fpenet/data"
)
os.environ["LOCAL_EXPERIMENT_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "fpenet"
)

# The sample spec files are present in the same path as the downloaded samples.
os.environ["LOCAL_SPECS_DIR"] = os.path.join(
    os.getenv("NOTEBOOK_ROOT", os.getcwd()),
    "specs"
)
%env SPECS_DIR=/workspace/tao-experiments/fpenet/specs
%env PROJECT_DIR=/workspace/tao-experiments

# Showing list of specification files.
!ls -rlt $LOCAL_SPECS_DIR

The cell below maps the project directory on your local host to a workspace directory inside the TAO docker instance, so that data and results are mapped in and out of the docker container. For more information, please refer to the launcher documentation in the user guide.

When running this cell on AWS, update the drive_map entry with the dictionary defined below (which adds a DockerOptions user mapping), so that you don't run into permission issues when writing data into folders created by the TAO docker.

drive_map = {
    "Mounts": [
            # Mapping the data directory
            {
                "source": os.environ["LOCAL_PROJECT_DIR"],
                "destination": "/workspace/tao-experiments"
            },
            # Mapping the specs directory.
            {
                "source": os.environ["LOCAL_SPECS_DIR"],
                "destination": os.environ["SPECS_DIR"]
            },
            # Mapping data
            {
                "source": os.environ["LOCAL_DATA_DIR"],
                "destination": os.environ["DATA_DIR"]
            },
        ],
    "DockerOptions": {
        "user": "{}:{}".format(os.getuid(), os.getgid())
    }
}
In [2]:
# Mapping the local directories to the TAO docker.
import json
mounts_file = os.path.expanduser("~/.tao_mounts.json")

# Define the dictionary with the mapped drives
drive_map = {
    "Mounts": [
        # Mapping the data directory
        {
            "source": os.environ["LOCAL_PROJECT_DIR"],
            "destination": os.environ["PROJECT_DIR"]
        },
        # Mapping the specs directory.
        {
            "source": os.environ["LOCAL_SPECS_DIR"],
            "destination": os.environ["SPECS_DIR"]
        },
        # Mapping data
        {
            "source": os.environ["LOCAL_DATA_DIR"],
            "destination": os.environ["DATA_DIR"]
        },
    ]
}

# Writing the mounts file.
with open(mounts_file, "w") as mfile:
    json.dump(drive_map, mfile, indent=4)
In [3]:
!cat ~/.tao_mounts.json
In [4]:
# Install requirement
!pip3 install -r $SAMPLES_DIR/deps/requirements-pip.txt

1. Install the TAO launcher

The TAO launcher is a python package distributed as a python wheel on PyPI. You may install the launcher by executing the following cell.

Please note that TAO Toolkit recommends running the TAO launcher in a virtual env with python 3.6.9. You may follow the instructions on this page to set up a python virtual env using the virtualenv and virtualenvwrapper packages. Once you have set up virtualenvwrapper, please set the version of python to be used in the virtual env with the VIRTUALENVWRAPPER_PYTHON variable. You may do so by running

export VIRTUALENVWRAPPER_PYTHON=/path/to/bin/python3.x

where x >= 6 and <= 8
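
For example, a minimal sketch of setting up such a virtual env (the paths below are illustrative; the location of virtualenvwrapper.sh and of the python binary will differ on your system):

export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3.6
source /usr/local/bin/virtualenvwrapper.sh
mkvirtualenv tao_launcher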

We recommend performing this step first and then launching the notebook from the virtual environment. In addition to installing the TAO python package, please make sure the following software requirements are met:

  • python >= 3.6.9, < 3.8.x
  • docker-ce > 19.03.5
  • docker-API 1.40
  • nvidia-container-toolkit > 1.3.0-1
  • nvidia-container-runtime > 3.4.0-1
  • nvidia-docker2 > 2.5.0-1
  • nvidia-driver >= 455

Once you have installed the pre-requisites, please log in to the docker registry nvcr.io using the command below:

docker login nvcr.io

You will be prompted to enter a username and password. The username is $oauthtoken and the password is the API key generated from ngc.nvidia.com. Please follow the instructions in the NGC setup guide to generate your own API key.

In [5]:
# Skip this step if you have already installed the TAO launcher.
!pip3 install nvidia-tao
In [6]:
# View the versions of the TAO launcher
!tao info

2. Prepare dataset and pre-trained model

Download the public dataset.

Please download and unzip the AFW dataset into the $LOCAL_EXPERIMENT_DIR directory.

https://ibug.doc.ic.ac.uk/download/annotations/afw.zip/

A. Download and Verify dataset

In [7]:
# Check the dataset is present
!if [ ! -d $LOCAL_EXPERIMENT_DIR/afw ]; then echo 'Data folder not found, please download.'; else echo 'Found Data folder.';fi
In [8]:
# convert dataset to the required format
import os
from data_utils import convert_dataset
afw_data_path = os.path.join(os.environ["LOCAL_EXPERIMENT_DIR"], 'afw')
afw_image_save_path = os.path.join(os.environ["USER_EXPERIMENT_DIR"], 'afw')
num_keypoints = int(os.environ["NUM_KEYPOINTS"])
if num_keypoints == 80:
    output_json_path = os.path.join(os.environ['LOCAL_DATA_DIR'], 'afw/afw.json')
    %env DATASET_ID=afw
elif num_keypoints == 10:
    output_json_path = os.path.join(os.environ['LOCAL_DATA_DIR'], 'afw_10/afw_10.json')
    %env DATASET_ID=afw_10

convert_dataset(afw_data_path, output_json_path, afw_image_save_path, num_keypoints)
# Note that we are using dummy labels for keypoints 69 to 80 if NUM_KEYPOINTS=80.

print('Dataset conversion finished.')
In [9]:
# Check the dataset is generated
!if [ ! -f $LOCAL_DATA_DIR/$DATASET_ID/${DATASET_ID}.json ]; then echo 'Labels not found, please regenerate.'; else echo 'Found Labels.';fi
In [10]:
# Sample json label.
!sed -n 1,201p $LOCAL_DATA_DIR/$DATASET_ID/${DATASET_ID}.json
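
As an optional sanity check, the sketch below loads the converted labels and counts the entries. It assumes the file is a top-level JSON array with one entry per annotated face, as suggested by the sample printed above; adjust it if your output differs.

import json

label_file = os.path.join(os.environ['LOCAL_DATA_DIR'],
                          os.environ['DATASET_ID'],
                          os.environ['DATASET_ID'] + '.json')
with open(label_file, 'r') as f:
    labels = json.load(f)
print('Number of labelled entries:', len(labels))
print('Keys of the first entry   :', list(labels[0].keys()))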
In [11]:
# Sample image.
import os
from IPython.display import Image
Image(filename=os.path.join(afw_data_path, '134212_1.png'))

B. Obtain pre-trained model

Please follow the instructions below to download and verify the pretrained model for FpeNet.

For the FpeNet pre-trained model, please download nvidia/tao/fpenet:trainable_v1.0.

After obtaining the pre-trained model, please place it in $LOCAL_EXPERIMENT_DIR.

You will then have the following path:

  • pre-trained model in $LOCAL_EXPERIMENT_DIR/pretrained_models/fpenet_vtrainable_v1.0/model.tlt
In [12]:
# Installing NGC CLI on the local machine.
## Download and install
%env CLI=ngccli_cat_linux.zip
!mkdir -p $LOCAL_PROJECT_DIR/ngccli

# Remove any previously existing CLI installations
!rm -rf $LOCAL_PROJECT_DIR/ngccli/*
!wget "https://ngc.nvidia.com/downloads/$CLI" -P $LOCAL_PROJECT_DIR/ngccli
!unzip -u "$LOCAL_PROJECT_DIR/ngccli/$CLI" -d $LOCAL_PROJECT_DIR/ngccli/
!rm $LOCAL_PROJECT_DIR/ngccli/*.zip 
os.environ["PATH"]="{}/ngccli/ngc-cli:{}".format(os.getenv("LOCAL_PROJECT_DIR", ""), os.getenv("PATH", ""))
In [13]:
# List models available in the model registry.
!ngc registry model list nvidia/tao/fpenet:*
In [14]:
# Create the target destination to download the model.
!mkdir -p $LOCAL_EXPERIMENT_DIR/pretrained_models/
In [15]:
# Download the pretrained model from NGC
!ngc registry model download-version nvidia/tao/fpenet:trainable_v1.0 \
    --dest $LOCAL_EXPERIMENT_DIR/pretrained_models/
In [16]:
!ls -rlt $LOCAL_EXPERIMENT_DIR/pretrained_models/fpenet_vtrainable_v1.0 
In [17]:
# Check the model is present
!if [ ! -f $LOCAL_EXPERIMENT_DIR/pretrained_models/fpenet_vtrainable_v1.0/model.tlt ]; then echo 'Pretrained model file not found, please download.'; else echo 'Found pretrained model file.';fi

3. Generate tfrecords from labels in json format

  • Create the tfrecords using the dataset_convert command
  • The input is the ground-truth landmark labels and the output is tfrecord files
In [18]:
# Modify dataset_config for data preparation
# verify all paths
num_keypoints = int(os.environ["NUM_KEYPOINTS"])
if num_keypoints==80:
    %env DATASET_CONFIG=dataset_config.yaml
elif num_keypoints==10:
    %env DATASET_CONFIG=dataset_config_10.yaml
else:
    print("No dataset config for ", num_keypoints)
!cat $LOCAL_SPECS_DIR/$DATASET_CONFIG
In [19]:
!ls $LOCAL_DATA_DIR/$DATASET_ID
In [20]:
!tao fpenet dataset_convert -e $SPECS_DIR/$DATASET_CONFIG
In [21]:
# check the tfrecords are generated
!if [ ! -d $LOCAL_EXPERIMENT_DIR/data/tfrecords/$DATASET_ID/FpeTfRecords ]; then echo 'Tfrecords folder not found, please generate.'; else echo 'Found Tfrecords folder.';fi

4. Provide training specification

  • Tfrecords for the train dataset
    • In order to use the newly generated tfrecords for training, update the 'tfrecords_directory_path' and 'tfrecord_folder_name' parameters of the 'dataset_info' section in the spec file at $SPECS_DIR/experiment_spec.yaml (a quick sanity check is sketched after this list)
  • Pre-trained model path
    • Update "pretrained_model_path" in the spec file at $SPECS_DIR/experiment_spec.yaml
    • If you want to train from random weights on your own data, you can set "pretrained_model_path" to "null"
  • Augmentation parameters for on-the-fly data augmentation
  • Other training (hyper-)parameters such as batch size, number of epochs, learning rate, etc.
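
Before moving on, a small optional sanity check can confirm the spec points at the right data and pretrained model. The sketch below is not part of the official workflow: it assumes the spec is plain YAML readable by PyYAML and simply searches it for the key names mentioned above, whatever their nesting.

import yaml  # assuming PyYAML is available in the notebook environment

# Pick the spec that matches NUM_KEYPOINTS (mirrors the logic of the next cell).
spec_name = 'experiment_spec.yaml' if int(os.environ['NUM_KEYPOINTS']) == 80 else 'experiment_spec_10.yaml'
spec_path = os.path.join(os.environ['LOCAL_SPECS_DIR'], spec_name)
with open(spec_path, 'r') as f:
    spec = yaml.safe_load(f)

def find_key(node, key):
    """Recursively search a nested dict/list for the first occurrence of `key`."""
    if isinstance(node, dict):
        if key in node:
            return node[key]
        node = list(node.values())
    if isinstance(node, list):
        for child in node:
            found = find_key(child, key)
            if found is not None:
                return found
    return None

for key in ('tfrecords_directory_path', 'tfrecord_folder_name', 'pretrained_model_path'):
    print('{:28s}: {}'.format(key, find_key(spec, key)))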
In [22]:
num_keypoints = int(os.environ["NUM_KEYPOINTS"])
if num_keypoints==80:
    %env EXPERIMENT_SPEC=experiment_spec.yaml
elif num_keypoints==10:
    %env EXPERIMENT_SPEC=experiment_spec_10.yaml
else:
    print("No experiment spec for ", num_keypoints)
In [23]:
!cat $LOCAL_SPECS_DIR/$EXPERIMENT_SPEC

5. Run TAO training

  • Provide the sample spec file and the output directory location for models

Note: The training may take hours to complete. Also, the rest of the notebook assumes that the training was done in single-GPU mode.
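
If you do want to train with multiple GPUs, a sketch of the command is shown below. It assumes the launcher exposes the standard --gpus option for the train task; check tao fpenet train --help to confirm before using it.

!tao fpenet train -e $SPECS_DIR/$EXPERIMENT_SPEC \
                  -r $USER_EXPERIMENT_DIR/models/exp1 \
                  -k $KEY \
                  --gpus $NUM_GPUS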

In [24]:
!tao fpenet train -e $SPECS_DIR/$EXPERIMENT_SPEC \
                  -r $USER_EXPERIMENT_DIR/models/exp1 \
                  -k $KEY
In [25]:
# check the training folder for generated files
!ls -lh $LOCAL_EXPERIMENT_DIR/models/exp1

6. Evaluate the trained model

In [26]:
!tao fpenet evaluate  -m $USER_EXPERIMENT_DIR/models/exp1 \
                      -k $KEY
In [27]:
# check the kpi predictions file is generated
!if [ ! -f $LOCAL_EXPERIMENT_DIR/models/exp1/kpi_testing_error_per_region.csv ]; then echo 'KPI results file not found!'; else cat $LOCAL_EXPERIMENT_DIR/models/exp1/kpi_testing_error_per_region.csv;fi
# Since keypoints 69 to 80 are dummy labels, the error for the pupils and ears will be high.
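
For a more readable view of the per-region errors, the sketch below loads the same CSV with pandas (assuming pandas is available in the notebook environment and the file is a plain comma-separated table with a header row):

import pandas as pd

kpi_file = os.path.join(os.environ['LOCAL_EXPERIMENT_DIR'],
                        'models/exp1/kpi_testing_error_per_region.csv')
if os.path.isfile(kpi_file):
    print(pd.read_csv(kpi_file))
else:
    print('KPI results file not found!')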

7. Run inference on testing set

In [28]:
!tao fpenet inference -e $SPECS_DIR/$EXPERIMENT_SPEC \
                      -i $SPECS_DIR/inference_sample.json \
                      -r $LOCAL_PROJECT_DIR \
                      -m $USER_EXPERIMENT_DIR/models/exp1/model.tlt \
                      -o $USER_EXPERIMENT_DIR/models/exp1 \
                      -k $KEY
In [29]:
# check the results file is generated
!if [ ! -f $LOCAL_EXPERIMENT_DIR/models/exp1/result.txt ]; then echo 'Results file not found!'; else cat $LOCAL_EXPERIMENT_DIR/models/exp1/result.txt;fi
In [30]:
import os
import cv2
import IPython.display
import PIL.Image
%matplotlib inline
num_keypoints = int(os.environ["NUM_KEYPOINTS"])
if num_keypoints == 80: # not drawing ear points if NUM_KEYPOINTS=80
    num_keypoints=76
# read results
results_file = os.path.join(os.environ['LOCAL_EXPERIMENT_DIR'], 'models/exp1/result.txt')
with open(results_file, 'r') as f:
    results = f.readlines()[0] # display one image as an example

pred_part = results.strip().split(' ')
# get image path (map the docker workspace path back to the local path)
image_path = pred_part[0].replace(os.environ["USER_EXPERIMENT_DIR"], os.environ["LOCAL_EXPERIMENT_DIR"])
# get predictions
fl_res = [float(x) for x in pred_part[1:]]
# read image
img = cv2.imread(image_path)
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# set color for landmarks (green)
fl_color = (0, 255, 0)
# loop through keypoints and draw them on the image
for q in range(num_keypoints):
    pred_x = fl_res[2*q]
    pred_y = fl_res[(2*q)+1]
    img_rgb = cv2.circle(img_rgb, (int(pred_x), int(pred_y)), 1, fl_color, 1)
# display image
IPython.display.display(PIL.Image.fromarray(img_rgb))
# Note that the accuracy is not guaranteed for this visualization example.

8. Deploy / Export

8.1 Export .etlt model

Use the export functionality to export an encrypted model in fp32 format without any optimizations.

In [31]:
!tao fpenet export -m $USER_EXPERIMENT_DIR/models/exp1/model.tlt \
                   -k $KEY \
                   --backend onnx
In [32]:
# check the deployment file is present
!if [ ! -f $LOCAL_EXPERIMENT_DIR/models/exp1/model.tlt.etlt ]; then echo 'Deployment file not found, please generate.'; else echo 'Found deployment file.';fi

8.2 INT8 Optimization

The FPENet model supports int8 inference mode in TensorRT. In order to use it, the model must first be calibrated to run 8-bit inferences. The process is as follows:

  • Provide a directory with a set of images to be used for calibration.
  • A calibration tensorfile is generated and saved at the path given by --cal_data_file.
  • This tensorfile is used to calibrate the model, and the resulting calibration table is stored at the path given by --cal_cache_file.
  • The calibration table, together with the model, is used to generate the int8 TensorRT engine at the path given by --engine_file.

Note: For this example, we generate a calibration tensorfile containing 100 batches of training data. Ideally, it is best to use at least 10-20% of the training data for calibration. The more data provided during calibration, the closer the int8 inferences are to the fp32 inferences.

In [33]:
# Number of calibration samples to use
%set_env NUM_CALIB_SAMPLES=100
In [34]:
!python3 sample_calibration_images.py \
    -a $LOCAL_DATA_DIR/$DATASET_ID/${DATASET_ID}.json \
    -oi $USER_EXPERIMENT_DIR \
    -ni $LOCAL_EXPERIMENT_DIR \
    -o $LOCAL_EXPERIMENT_DIR/data/calibration_samples/ \
    -n $NUM_CALIB_SAMPLES \
    --num_keypoints $NUM_KEYPOINTS \
    --randomize
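
As a quick check (assuming the script writes the sampled images directly into the folder given by -o), you can count how many calibration images were generated:

# Count the generated calibration images
!ls $LOCAL_EXPERIMENT_DIR/data/calibration_samples/ | wc -l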

8.3 Export Deployable INT8 Model

In [35]:
!tao fpenet export -m $USER_EXPERIMENT_DIR/models/exp1/model.tlt \
                   -k $KEY \
                   --engine_file $USER_EXPERIMENT_DIR/models/exp1/model.int8.engine \
                   --data_type int8 \
                   --cal_image_dir $LOCAL_EXPERIMENT_DIR/data/calibration_samples/ \
                   --cal_cache_file $USER_EXPERIMENT_DIR/models/exp1/int8_calibration.bin \
                   --cal_data_file $USER_EXPERIMENT_DIR/models/exp1/int8_calibration.tensorfile \
                   --batches 100 \
                   --backend onnx

8.4 Run Inference on Exported INT8 Engine File

In [36]:
!tao fpenet inference -e $SPECS_DIR/$EXPERIMENT_SPEC \
                      -i $SPECS_DIR/inference_sample.json \
                      -r $LOCAL_PROJECT_DIR \
                      -m $USER_EXPERIMENT_DIR/models/exp1/model.int8.engine \
                      -o $USER_EXPERIMENT_DIR/models/exp1 \
                      -k $KEY
In [37]:
# check the results file is generated
!if [ ! -f $LOCAL_EXPERIMENT_DIR/models/exp1/result.txt ]; then echo 'Results file not found!'; else cat $LOCAL_EXPERIMENT_DIR/models/exp1/result.txt;fi