
Gaze Demo for Jetson/L4T


Description: Gaze Demo container showcasing gaze detection running on Jetson.

Publisher: NVIDIA

Latest Tag: r32.4.2

Modified: September 1, 2023

Compressed Size: 2.32 GB

Multinode Support: No

Multi-Arch Support: No


Gaze Demo Container for Jetson

The gaze demo container packages a demonstration of running a gaze detection model on Jetson. The container supports running gaze detection on a video file input.

The container includes three models, which run in sequence per frame (see the sketch below):

MTCNN model for face detection with an input image resolution of 260x135. The model was converted from Caffe to TensorRT.

NVIDIA facial landmarks model with an input resolution of 80x80 per face. The model was converted from TensorFlow to TensorRT.

NVIDIA gaze model with an input resolution of 224x224 each for the left eye, right eye, and whole face. The model was converted from TensorFlow to TensorRT.

Note that the gaze demo currently ships TensorRT engine files built only for Jetson AGX Xavier and Jetson Xavier NX, so the demo can be run on those two devices only.

The container requires JetPack 4.4 Developer Preview (L4T R32.4.2).
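
For orientation, the sequential flow through the three models looks roughly like the Python sketch below. Every helper here (read_frames, detect_faces, predict_landmarks, predict_gaze) is a hypothetical stand-in for the TensorRT engine calls inside the demo's run_gaze_sequential.py, not the container's actual API:

import numpy as np

def read_frames(path):
    # Hypothetical frame source; the real demo decodes an h264 video stream.
    yield np.zeros((135, 260, 3), dtype=np.uint8)  # one blank frame at MTCNN's 260x135

def detect_faces(frame):
    # Stand-in for the MTCNN face detection engine: returns boxes as (x, y, w, h).
    return [(90, 30, 80, 80)]

def predict_landmarks(face_crop):
    # Stand-in for the facial landmarks engine (80x80 input per face).
    return np.zeros((68, 2))

def predict_gaze(left_eye, right_eye, face):
    # Stand-in for the gaze engine (224x224 input per region).
    return np.zeros(2)

for frame in read_frames("/videos/gaze_video.mp4"):
    for (x, y, w, h) in detect_faces(frame):
        face = frame[y:y + h, x:x + w]      # the real demo resizes this crop to 80x80
        landmarks = predict_landmarks(face)
        # The real demo uses the landmarks to crop both eyes, then resizes the
        # eye and face crops to 224x224 before running the gaze engine.
        left_eye = right_eye = face
        gaze = predict_gaze(left_eye, right_eye, face)
        print("gaze direction:", gaze)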

Running Gaze Detection Demo

Prerequisites

Ensure these prerequisites are available on your system:

  1. Jetson device running L4T r32.4.2

  2. JetPack 4.4 Developer Preview (DP)
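
Before pulling the image, you can confirm the L4T version on your device; on L4T systems the release string lives in /etc/nv_tegra_release:

cat /etc/nv_tegra_release

For this container, the first line should report R32 (release) with REVISION: 4.2.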

Pulling the container

First, pull the container image:

sudo docker pull nvcr.io/nvidia/jetson-gaze:r32.4.2
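
Once the pull completes, you can confirm the image is available locally:

sudo docker images nvcr.io/nvidia/jetson-gaze

The r32.4.2 tag should appear in the listing.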

Running the container

To run gaze detection on a built-in video, run the following commands. The xhost command grants the container's root user access to your X display, while the DISPLAY variable and the /tmp/.X11-unix mount let the demo render its output window:

sudo xhost +si:localuser:root
sudo docker run --runtime nvidia -it --rm --network host \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    nvcr.io/nvidia/jetson-gaze:r32.4.2 \
    python3 run_gaze_sequential.py /videos/gaze_video.mp4 --loop --codec=h264

To run gaze detection on your own video (.h264 format), run the following commands; the extra -v option mounts your video directory into the container:

sudo xhost +si:localuser:root
sudo docker run --runtime nvidia -it --rm --network host \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    -v /my_video_directory/:/userVideos \
    nvcr.io/nvidia/jetson-gaze:r32.4.2 \
    python3 run_gaze_sequential.py /userVideos/my_video_name --loop --codec=h264

Replace /my_video_directory/ with the full path to the directory where you saved your video, and replace my_video_name with your video's file name.
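
As a concrete example, assuming your clip is saved at /home/user/clips/demo.h264 (a hypothetical path and filename), the commands become:

sudo xhost +si:localuser:root
sudo docker run --runtime nvidia -it --rm --network host \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    -v /home/user/clips/:/userVideos \
    nvcr.io/nvidia/jetson-gaze:r32.4.2 \
    python3 run_gaze_sequential.py /userVideos/demo.h264 --loop --codec=h264

If your source is an .mp4 file whose video track is already H.264-encoded, one way to extract a raw h264 elementary stream on the host is with ffmpeg:

ffmpeg -i my_video.mp4 -c:v copy -bsf:v h264_mp4toannexb -f h264 my_video.h264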

Running the container as part of the cloud native demo on Jetson

The cloud native demo on Jetson showcases how Jetson brings cloud native methodologies such as containerization to the edge. The demo is built around an example use case of AI applications for service robots, and it showcases people detection, pose detection, gaze detection, and natural language processing all running simultaneously as containers on Jetson.

Please follow the instructions in the https://github.com/NVIDIA-AI-IOT/jetson-cloudnative-demo GitHub repository to run the gaze demo container as part of the cloud native demo.

License

The gaze demo container includes various software packages with their respective licenses included within the container.

Getting Help & Support

If you have any questions or need help, please visit the Jetson Developer Forums.