
Gaze Estimation using TAO GazeNet

Transfer learning is the process of transferring learned features from one application to another. It is a commonly used training technique in which a model trained on one task is retrained for a different task.

Train Adapt Optimize (TAO) Toolkit is a simple and easy-to-use Python based AI toolkit for taking purpose-built AI models and customizing them with users' own data.

Learning Objectives

In this notebook, you will learn how to leverage the simplicity and convenience of TAO to:

  • Take a pretrained model and train a GazeNet model on a subset of the MPIIFaceGaze dataset
  • Run inference on the trained model
  • Export the retrained model to a .etlt file for deployment to DeepStream SDK

Table of Contents

This notebook shows an example of gaze estimation using GazeNet in the Train Adapt Optimize (TAO) Toolkit.

  0. Set up env variables, map drives, and install dependencies
  1. Install the TAO launcher
  2. Prepare dataset and pre-trained model
    2.1 Verify downloaded dataset
    2.2 Convert datasets and labels to required format
    2.3 Verify dataset generation
    2.4 Download pre-trained model
  3. Generate tfrecords from labels in json format
  4. Provide training specification
  5. Run TAO training
  6. Evaluate the trained model
  7. Visualize Inference
  8. Deploy

0. Set up env variables, map drives, and install dependencies

When using the purpose-built pretrained models from NGC, please make sure to set the $KEY environment variable to the key as mentioned in the model overview. Failing to do so can lead to errors when trying to load them as pretrained models.

This notebook requires the user to set an env variable called $LOCAL_PROJECT_DIR as the path to the user's workspace. Please note that the dataset for this notebook is expected to reside in $LOCAL_PROJECT_DIR/gazenet/data, while the TAO experiment generated collaterals will be output to $LOCAL_PROJECT_DIR/gazenet. More information on how to set up the dataset and on the supported steps in the TAO workflow is provided in the subsequent cells.

Note: This notebook is currently set up by default to run training using 1 GPU. To use more GPUs, please update the env variable $NUM_GPUS accordingly.

In [1]:
# Setting up env variables for cleaner command line commands.
import os

%env KEY=nvidia_tlt
%env NUM_GPUS=1
%env USER_EXPERIMENT_DIR=/workspace/tao-experiments/gazenet
%env DATA_DOWNLOAD_DIR=/workspace/tao-experiments/gazenet/data

# Set this path if you don't run the notebook from the samples directory.
# %env NOTEBOOK_ROOT=~/tao-samples/gazenet

# Please define this local project directory that needs to be mapped to the TAO docker session.
# The dataset is expected to be present in $LOCAL_PROJECT_DIR/gazenet/data, while the results for the steps
# in this notebook will be stored at $LOCAL_PROJECT_DIR/gazenet
# !PLEASE MAKE SURE TO UPDATE THIS PATH!
%env LOCAL_PROJECT_DIR=FIXME

# $SAMPLES_DIR is the path to the sample notebook folder and the dependency folder
# $SAMPLES_DIR/deps should exist for dependency installation
%env SAMPLES_DIR=FIXME

os.environ["LOCAL_DATA_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "gazenet/data"
)
os.environ["LOCAL_EXPERIMENT_DIR"] = os.path.join(
    os.getenv("LOCAL_PROJECT_DIR", os.getcwd()),
    "gazenet"
)

# The sample spec files are present in the same path as the downloaded samples.
os.environ["LOCAL_SPECS_DIR"] = os.path.join(
    os.getenv("NOTEBOOK_ROOT", os.getcwd()),
    "specs"
)
%env SPECS_DIR=/workspace/tao-experiments/gazenet/specs

# Showing list of specification files.
!ls -rlt $LOCAL_SPECS_DIR

The cell below maps the project directory on your local host to a workspace directory in the TAO docker instance, so that data and results are mapped into and out of the docker. For more information, please refer to the launcher instance in the user guide.

When running this cell on AWS, update the drive_map entry with the dictionary defined below, so that you don't have permission issues when writing data into folders created by the TAO docker.

drive_map = {
    "Mounts": [
            # Mapping the data directory
            {
                "source": os.environ["LOCAL_PROJECT_DIR"],
                "destination": "/workspace/tao-experiments"
            },
            # Mapping the specs directory.
            {
                "source": os.environ["LOCAL_SPECS_DIR"],
                "destination": os.environ["SPECS_DIR"]
            },
        ],
    "DockerOptions": {
        "user": "{}:{}".format(os.getuid(), os.getgid())
    }
}
In [2]:
# Mapping the local directories to the TAO docker.
import json
mounts_file = os.path.expanduser("~/.tao_mounts.json")

# Define the dictionary with the mapped drives
drive_map = {
    "Mounts": [
        # Mapping the data directory
        {
            "source": os.environ["LOCAL_PROJECT_DIR"],
            "destination": "/workspace/tao-experiments"
        },
        # Mapping the specs directory.
        {
            "source": os.environ["LOCAL_SPECS_DIR"],
            "destination": os.environ["SPECS_DIR"]
        },
    ]
}

# Writing the mounts file.
with open(mounts_file, "w") as mfile:
    json.dump(drive_map, mfile, indent=4)
In [3]:
!cat ~/.tao_mounts.json
In [4]:
# Install requirement
!pip3 install -r $SAMPLES_DIR/deps/requirements-pip.txt

1. Install the TAO launcher

The TAO launcher is a python package distributed as a python wheel listed in the nvidia-pyindex python index. You may install the launcher by executing the following cell.

Please note that TAO Toolkit recommends users run the TAO launcher in a virtual env with python 3.6.9. You may follow the instructions on this page to set up a python virtual env using the virtualenv and virtualenvwrapper packages. Once you have set up virtualenvwrapper, please set the version of python to be used in the virtual env via the VIRTUALENVWRAPPER_PYTHON variable. You may do so by running

export VIRTUALENVWRAPPER_PYTHON=/path/to/bin/python3.x

where 6 <= x <= 8
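For example, a minimal sketch of the virtual env setup (the virtualenvwrapper.sh location and the env name tao-launcher are placeholders; adjust them for your system):

export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3.6
source /usr/local/bin/virtualenvwrapper.sh
mkvirtualenv -p $VIRTUALENVWRAPPER_PYTHON tao-launcher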

We recommend performing this step first and then launching the notebook from the virtual environment. In addition to installing the TAO python package, please make sure of the following software requirements (a quick version check is sketched after the list):

  • python >= 3.6.9, < 3.8
  • docker-ce > 19.03.5
  • docker-API 1.40
  • nvidia-container-toolkit > 1.3.0-1
  • nvidia-container-runtime > 3.4.0-1
  • nvidia-docker2 > 2.5.0-1
  • nvidia-driver >= 455
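As a quick sanity check, the versions of the key prerequisites can be printed from the notebook (these commands only report versions; they do not enforce the minimums above):

!python3 --version
!docker --version
!nvidia-smi --query-gpu=driver_version --format=csv,noheader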

Once you have installed the pre-requisites, please log in to the docker registry nvcr.io using the command below

docker login nvcr.io

You will be prompted to enter a username and password. The username is $oauthtoken and the password is the API key generated from ngc.nvidia.com. Please follow the instructions in the NGC setup guide to generate your own API key.

In [5]:
# Skip this cell if the TAO launcher was already installed.
!pip3 install nvidia-pyindex
!pip3 install nvidia-tao
In [6]:
# View the version of the TAO launcher
!tao info

2. Prepare dataset and pre-trained model

This notebook uses a subset of the MPIIFaceGaze dataset to illustrate the input data format for GazeNet and the procedure for using the generated data.

Please download the MPIIFaceGaze dataset from the following website: https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/research/gaze-based-human-computer-interaction/its-written-all-over-your-face-full-face-appearance-based-gaze-estimation

The labels for this subset, in the required json format, can be obtained from: $SAMPLES_DIR/gazenet/sample_labels

In [7]:
# Check that the label file is present
!if [ ! -f $SAMPLES_DIR/gazenet/sample_labels/data_factory.zip ]; then echo 'Label file not found, please check your sample path.'; else echo 'Found label file.';fi

After downloading the data, please unzip it to the MPIIFaceGaze folder and place the folder in $DATA_DOWNLOAD_DIR.

After downloading the labels, please unzip them to the data_factory folder and place the folder in MPIIFaceGaze.

You will then have the following paths (a sketch of the unzip commands follows the list):

  • input data in $LOCAL_DATA_DIR/MPIIFaceGaze
  • labels in $LOCAL_DATA_DIR/MPIIFaceGaze/data_factory
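For example, a minimal sketch run from the notebook, assuming the dataset archive was saved as $LOCAL_DATA_DIR/MPIIFaceGaze.zip (the archive name is an assumption; if an archive does not already contain its top-level folder, point -d at that folder explicitly):

!unzip -q $LOCAL_DATA_DIR/MPIIFaceGaze.zip -d $LOCAL_DATA_DIR
!unzip -q $SAMPLES_DIR/gazenet/sample_labels/data_factory.zip -d $LOCAL_DATA_DIR/MPIIFaceGaze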

A. Verify downloaded dataset

In [8]:
# Check the dataset is present
!mkdir -p $LOCAL_DATA_DIR
!if [ ! -d $LOCAL_DATA_DIR/MPIIFaceGaze ]; then echo 'Data folder not found, please download.'; else echo 'Found Data folder.';fi
!if [ ! -d $LOCAL_DATA_DIR/MPIIFaceGaze/data_factory ]; then echo 'Label folder not found, please download.'; else echo 'Found Labels folder.';fi
In [9]:
# Sample json label.
!sed -n 1,201p $LOCAL_DATA_DIR/MPIIFaceGaze/data_factory/day03/p01/p01_day03.json

B. Convert datasets and labels to required format

A script is provided to convert the subset of the MPIIFaceGaze dataset and the downloaded labels to the required folder structure and dataset format.

In [10]:
!python3 mpiifacegaze_convert.py --data_path $LOCAL_DATA_DIR/MPIIFaceGaze \
                                 --json_label_root_path $LOCAL_DATA_DIR/MPIIFaceGaze

C. Verify dataset generation

A dataset folder with the above-mentioned subset is created. All the data required to run GazeNet is saved under this folder.

  • Generated data folder in $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/p01-day03
  • Generated inference data folder in $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/inference-set

The converted dataset should have the following structure.

  • Config folder in $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/p01-day03/Config
  • Data folder in $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/p01-day03/Data
  • Labels folder in $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/p01-day03/json_datafactory_v2

The inference dataset should have the following structure.

  • Config folder in $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/inference-set/Config
  • Data folder in $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/inference-set/Data
  • Labels folder in $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/inference-set/json_datafactory_v2
In [11]:
# Check the generated data is present
!if [ ! -d $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/p01-day03 ]; then echo 'Generated data folder not found, please regenerate.'; else echo 'Found generated data folder.';fi
!if [ ! -d $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/p01-day03/Config ]; then echo 'Config folder not found, please regenerate.'; else echo 'Found Config folder.';fi
!if [ ! -d $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/p01-day03/Data ]; then echo 'Data folder not found, please regenerate.'; else echo 'Found Data folder.';fi
!if [ ! -d $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/p01-day03/json_datafactory_v2 ]; then echo 'Labels folder not found, please regenerate.'; else echo 'Found Labels folder.';fi

# Check the inference data is present
!if [ ! -d $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/inference-set ]; then echo 'Inference data folder not found, please regenerate.'; else echo 'Found inference data folder.';fi
!if [ ! -d $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/inference-set/Config ]; then echo 'Config folder not found, please regenerate.'; else echo 'Found Config folder.';fi
!if [ ! -d $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/inference-set/Data ]; then echo 'Data folder not found, please regenerate.'; else echo 'Found Data folder.';fi
!if [ ! -d $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/inference-set/json_datafactory_v2 ]; then echo 'Labels folder not found, please regenerate.'; else echo 'Found Labels folder.';fi

D. Download pre-trained model

Please follow the instructions below to download and verify the pretrained model for GazeNet.

For the GazeNet pretrained model, please download nvidia/tao/gazenet:trainable_v1.0.

After downloading the pre-trained model, please place the files in $LOCAL_EXPERIMENT_DIR/pretrain_models. You will then have the following path:

  • pretrained model in $LOCAL_EXPERIMENT_DIR/pretrain_models/gazenet_vtrainable_v1.0/model.tlt
In [12]:
# Installing NGC CLI on the local machine.
## Download and install
%env CLI=ngccli_cat_linux.zip
!mkdir -p $LOCAL_PROJECT_DIR/ngccli

# Remove any previously existing CLI installations
!rm -rf $LOCAL_PROJECT_DIR/ngccli/*
!wget "https://ngc.nvidia.com/downloads/$CLI" -P $LOCAL_PROJECT_DIR/ngccli
!unzip -u "$LOCAL_PROJECT_DIR/ngccli/$CLI" -d $LOCAL_PROJECT_DIR/ngccli/
!rm $LOCAL_PROJECT_DIR/ngccli/*.zip 
os.environ["PATH"]="{}/ngccli:{}".format(os.getenv("LOCAL_PROJECT_DIR", ""), os.getenv("PATH", ""))
In [13]:
# List models available in the model registry.
!ngc registry model list nvidia/tao/gazenet:*
In [14]:
# Create the target destination to download the model.
!mkdir -p $LOCAL_EXPERIMENT_DIR/pretrain_models/
In [15]:
# Download the pretrained model from NGC
!ngc registry model download-version nvidia/tao/gazenet:trainable_v1.0 \
    --dest $LOCAL_EXPERIMENT_DIR/pretrain_models/
In [16]:
!ls -rlt $LOCAL_EXPERIMENT_DIR/pretrain_models/gazenet_vtrainable_v1.0
In [17]:
# Check that the pretrained model is present
!if [ ! -f $LOCAL_EXPERIMENT_DIR/pretrain_models/gazenet_vtrainable_v1.0/model.tlt ]; then echo 'Pretrained model file not found, please download.'; else echo 'Found pretrained model file.';fi

3. Generate tfrecords from labels in json format

  • Create the tfrecords using the dataset_convert command
In [18]:
!tao gazenet dataset_convert -folder-suffix pipeline \
                             -norm_folder_name Norm_Data \
                             -sets p01-day03 \
                             -data_root_path $DATA_DOWNLOAD_DIR/MPIIFaceGaze/sample-dataset
In [19]:
!ls -rl $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset
In [20]:
# Check that the tfrecords are present
!if [ ! -d $LOCAL_DATA_DIR/MPIIFaceGaze/sample-dataset/p01-day03/Ground_Truth_DataFactory_pipeline ]; then echo 'Tfrecords folder not found, please generate.'; else echo 'Found Tfrecords folder.';fi

4. Provide training specification

  • Tfrecords for the train datasets
    • In order to use the newly generated tfrecords for training, update the "ground_truth_folder_name" and "tfrecords_directory_path" parameters of the "dataset_info" section in the spec file at $SPECS_DIR/gazenet_tlt_pretrain.yaml (an illustrative excerpt follows this list)
  • Pre-trained model path
    • Update "pretrained_model_path" in the spec file at $SPECS_DIR/gazenet_tlt_pretrain.yaml
    • If you want to train from random weights on your own data, set "pretrained_model_path" to null
  • Augmentation parameters for on-the-fly data augmentation
  • Other training (hyper-)parameters such as batch size, number of epochs, learning rate, etc.
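As an illustrative excerpt only (not the authoritative spec layout; the exact structure of gazenet_tlt_pretrain.yaml is shown by the cell below, and these values are placeholders indicating where the edits go):

dataset_info:
  ground_truth_folder_name: Ground_Truth_DataFactory_pipeline
  tfrecords_directory_path: /workspace/tao-experiments/gazenet/data/MPIIFaceGaze/sample-dataset
pretrained_model_path: /workspace/tao-experiments/gazenet/pretrain_models/gazenet_vtrainable_v1.0/model.tlt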
In [21]:
!cat $LOCAL_SPECS_DIR/gazenet_tlt_pretrain.yaml

5. Run TAO training

  • Provide the sample spec file and the output directory location for models

Note: The training may take hours to complete. The remainder of this notebook assumes that the training was done in single-GPU mode.

In [22]:
!tao gazenet train -e $SPECS_DIR/gazenet_tlt_pretrain.yaml \
                   -r $USER_EXPERIMENT_DIR/experiment_result/exp1 \
                   -k $KEY
In [23]:
!ls -lh $LOCAL_EXPERIMENT_DIR/experiment_result/exp1

6. Evaluate the trained model

In [24]:
!tao gazenet evaluate -type kpi_testing \
                      -m $USER_EXPERIMENT_DIR/experiment_result/exp1 \
                      -e $SPECS_DIR/gazenet_tlt_pretrain.yaml \
                      -k $KEY
In [25]:
!ls -lh $LOCAL_EXPERIMENT_DIR/experiment_result/exp1/KPI_TMP

7. Visualize Inference

In [26]:
!tao gazenet inference -e $SPECS_DIR/gazenet_tlt_pretrain.yaml \
                       -i $DATA_DOWNLOAD_DIR/MPIIFaceGaze/sample-dataset/inference-set \
                       -m $USER_EXPERIMENT_DIR/experiment_result/exp1/model.tlt \
                       -o $USER_EXPERIMENT_DIR/experiment_result/exp1 \
                       -k $KEY
In [27]:
!ls -lh $LOCAL_EXPERIMENT_DIR/experiment_result/exp1/result.txt
In [28]:
import sys
import cv2
import numpy as np
import os
import json
import IPython.display
import PIL.Image
from utils_gazeviz import load_cam_intrinsics,\
        get_landmarks_dict, visualize_frame

# load data
data_root_path = os.path.join(os.environ['LOCAL_DATA_DIR'],
                              'MPIIFaceGaze/sample-dataset/inference-set')
print(data_root_path)
# load calibration
config_path = os.path.join(data_root_path, 'Config')
calib = {}
camera_mat, distortion_coeffs = load_cam_intrinsics(config_path)
distortion_coeffs = distortion_coeffs[0:5]
calib['cam'] = camera_mat
calib['dist'] = distortion_coeffs

# load json files
json_file_folder = os.path.join(data_root_path, 'json_datafactory_v2')
landmarks_dict = get_landmarks_dict(json_file_folder)
assert len(landmarks_dict.keys()) > 0

# visualize each frame in the result file
num_viz_frames = 5
result_path = os.path.join(os.environ['LOCAL_EXPERIMENT_DIR'],
                           "experiment_result/exp1/result.txt")

with open(result_path, 'r') as reader:
    lines = reader.readlines()

num_lines = len(lines)
num_viz_frames = min(num_viz_frames, num_lines)
for k in range(0, num_viz_frames):
    content = lines[k]
    line_info = content.split(' ')
    old_frame_path = line_info[0]
    sub_path = old_frame_path.split(os.environ['DATA_DOWNLOAD_DIR'])[-1]
    frame_path = os.environ['LOCAL_DATA_DIR'] + sub_path
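    # Columns 1-3 of each line in result.txt hold the model's 3D gaze output in camera coordinates.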
    cam_coord = np.array(line_info[1:4], dtype=np.float32)
    frame_name = frame_path.split('/')[-1]
    landmarks_2D = landmarks_dict[frame_name]
    display_frame, le_px, le_por, re_px, re_por = visualize_frame(frame_path, landmarks_2D, cam_coord, calib)
    # Visualize selected landmarks
    cv2.arrowedLine(display_frame, tuple(le_px), tuple(le_por), (0, 255, 0), thickness=2, tipLength=0.05)
    cv2.arrowedLine(display_frame, tuple(re_px), tuple(re_por), (0, 255, 0), thickness=2, tipLength=0.05)
    IPython.display.display(PIL.Image.fromarray(display_frame))

8. Deploy

In [29]:
!mkdir -p $LOCAL_EXPERIMENT_DIR/experiment_dir_final
# Remove any pre-existing copy of the exported etlt file.
import os
output_file = os.path.join(os.environ['LOCAL_EXPERIMENT_DIR'],
                           "experiment_dir_final/gazenet_onnx.etlt")
if os.path.exists(output_file):
    os.remove(output_file)

!tao gazenet export -m $USER_EXPERIMENT_DIR/experiment_result/exp1/model.tlt \
                    -o $USER_EXPERIMENT_DIR/experiment_dir_final/gazenet_onnx.etlt \
                    -t tfonnx \
                    -k $KEY
In [30]:
# Check that the exported file is present
!if [ ! -f $LOCAL_EXPERIMENT_DIR/experiment_dir_final/gazenet_onnx.etlt ]; then echo 'Exported etlt file not found, please generate.'; else echo 'Found exported etlt file.';fi