Generating synthetic data in the cloud is key to scaling deep learning workflows. This container gives you access to Omniverse Code, an integrated development environment (IDE) that enables developers to build synthetic data generation pipelines by exposing Omniverse Replicator.
Omniverse Replicator is a highly extensible framework built on a scalable Omniverse platform that enables physically accurate 3D synthetic data generation to accelerate training and performance of AI perception networks.
For a complete list of updates and features, view Replicator Documentation here.
About Replicator Container
This release is offered as a container that runs locally or on NVIDIA RTX-equipped Amazon Web Services (AWS) EC2 instances. This cloud-based delivery provides the latest RTX graphics and performance to any desktop system without requiring local NVIDIA RTX GPUs.
Using the Omniverse Replicator container requires the host system to have Docker and the NVIDIA Container Toolkit installed.
For supported versions, see the NVIDIA Container Toolkit Documentation.
No other installation, compilation, or dependency management is required. It is not necessary to install the NVIDIA CUDA Toolkit.
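As a quick sanity check that the host can expose its GPUs to containers, you can run nvidia-smi inside a base CUDA image; the image tag below is only an example, and any recent CUDA base image works:

```shell
# Verify that Docker can see the NVIDIA GPUs on the host.
# The CUDA image tag is an example; substitute any recent tag available on NGC.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If this prints the familiar nvidia-smi table, the host is ready to run the Replicator container.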
Prerequisites for deploying in the cloud
To use the Replicator container in AWS, follow the detailed documentation here. Following those instructions will give you an Ubuntu machine with graphics drivers ready to run the container.
Starting Omniverse Replicator Container
To run a container, refer to Running A Container chapter in the NVIDIA Containers For Deep Learning Frameworks User’s Guide and specify the registry, repository, and tags. For more information about using NGC, refer to the NGC Container User Guide.
To pull the container, first make sure to log in to the NGC docker registry.
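Logging in to nvcr.io follows the standard NGC pattern: the username is the literal string $oauthtoken and the password is your NGC API key (shown as a placeholder below):

```shell
# Log in to the NGC container registry.
# The username is literally "$oauthtoken"; the password is your NGC API key.
docker login nvcr.io
# Username: $oauthtoken
# Password: <your NGC API key>
```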
If you have Docker 19.03 or later, a typical command to launch the container is:
docker run --gpus all --entrypoint /bin/bash -it nvcr.io/nvidia/omniverse-replicator:xx
If you have Docker 19.02 or earlier, a typical command to launch the container is:
nvidia-docker run --entrypoint /bin/bash -it nvcr.io/nvidia/omniverse-replicator:xx
- xx is the container version. For example, 1.5.3-r1.
Running Omniverse Replicator
Within the container you are ready to run Omniverse Replicator. From the container, you can run the script shown here. You must first copy the script into the container, then launch it with the following command:
./startup.sh --allow-root --no-window --/omni/replicator/script=test.py
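The copy step can be done from the host with docker cp; the container name and destination path below are illustrative, not fixed by the container:

```shell
# Find the running container's name or ID.
docker container ls
# Copy the script from the host into the container
# (container name and destination path are examples).
docker cp test.py <container-name>:/test.py
```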
After a few minutes, you will see an _output folder containing ten images of basic shapes. Note that the script may not exit cleanly and may show errors. This is expected: Kit fails to find a display because it is running headless. This is not a blocker.
Note: Launching this for the first time will have a start-up time of about two minutes; consecutive runs will be much faster. To reduce this, see the next section.
Accelerating Start-up Time
You will notice that the first time you launch the container, it has a lengthy start-up time of about two minutes due to shader compilation, regardless of how much data you are generating. To minimize the start-up time, so that you can redeploy the container on the same machine repeatedly without this delay, follow these steps:
docker run --gpus all --entrypoint /bin/bash -it nvcr.io/nvidia/omniverse-replicator:xx
Within the container, run the startup script from the previous section so that the shaders are compiled.
After that script has run, commit the container from a different terminal (for more information on docker commit, click here):
docker commit [OPTIONS] CONTAINER omniverse-replicator-startup:v1
CONTAINER here refers to the container running in the other terminal. You can find its ID using:
docker container ls
After running this, you can stop the container and relaunch using the committed image. Note that shaders will recompile if you launch this container on a new machine or with a slightly different driver version; even a patch release makes a difference.
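Putting the commit workflow together, a minimal end-to-end sketch looks like the following; the container ID and image name are examples:

```shell
# 1. Start the container and run the startup script inside it (see above).
# 2. In a second terminal, find the running container's ID.
docker container ls
# 3. Commit it as a new image with the compiled shader caches baked in.
docker commit <container-id> omniverse-replicator-startup:v1
# 4. Future runs use the committed image and skip shader compilation.
docker run --gpus all --entrypoint /bin/bash -it omniverse-replicator-startup:v1
```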
By pulling and using the container, you accept the terms and conditions of the NVIDIA Omniverse License Agreement.