PaddlePaddle provides an intuitive and flexible interface for loading data and specifying model structures. It supports CNNs, RNNs, and many of their variants, and makes it easy to configure complicated deep models.
It also provides highly optimized operations, memory recycling, and network communication. PaddlePaddle makes it easy to scale heterogeneous computing resources and storage to accelerate the training process.
Before running the container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.
In the Tags section, locate the container image release that you want to run.
In the Pull column, click the icon to copy the docker pull command.
Open a command prompt and paste the pull command. The pulling of the container image begins. Ensure the pull completes successfully before proceeding to the next step.
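As a sketch, the copied command has the following shape. The tag here is a placeholder; substitute the release you copied from the Pull column.

```shell
# Sketch of the pull step. "xx.xx" is a placeholder tag -- substitute
# the release you copied from the Pull column on the NGC page.
TAG="xx.xx"
IMAGE="nvcr.io/nvidia/paddle:${TAG}"
echo "Pulling ${IMAGE}"
# docker pull "${IMAGE}"   # uncomment to pull; requires Docker and NGC access
```

The actual pull is left commented out so the sketch can be adapted before contacting the registry.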
Run the container image. To run the container in interactive mode, issue the following command:
nvidia-docker run -it --rm -v local_dir:container_dir nvcr.io/nvidia/paddle:<xx.xx>
-it means run in interactive mode.
--rm means delete the container when it finishes.
-v mounts a host directory into the container, in the form local_dir:container_dir.
local_dir is the directory or file from your host system (absolute path) that you want to access from inside your container. For example, the local_dir in the mount -v /home/jsmith/data/mnist:/data/mnist is /home/jsmith/data/mnist.
If you are inside the container and issue, for example, ls /data/mnist, you will see the same files as if you issued the ls /home/jsmith/data/mnist command from outside the container.
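The bind-mount correspondence can be sketched as a simple path mapping; the file name below is hypothetical, for illustration only.

```shell
# Sketch of the correspondence set up by -v /home/jsmith/data/mnist:/data/mnist:
# any path under the host directory appears at the matching path under the
# container directory.
LOCAL_DIR="/home/jsmith/data/mnist"   # host side of -v local_dir:container_dir
CONTAINER_DIR="/data/mnist"           # container side
HOST_FILE="${LOCAL_DIR}/train.bin"    # hypothetical file name, for illustration
# Inside the container, the same file is visible at:
IN_CONTAINER="${CONTAINER_DIR}${HOST_FILE#$LOCAL_DIR}"
echo "${IN_CONTAINER}"
```

Because the mount is a bind mount, changes made on either side are immediately visible on the other.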
container_dir is the target directory when you are inside your container. For example, /data/mnist is the target directory in the example -v /home/jsmith/data/mnist:/data/mnist.
<xx.xx> is the tag identifying the container version you want to run; choose it from the Tags section of the NGC page.
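Putting the pieces together, a complete interactive invocation using the example mount from this section might look like the following; the tag is again a placeholder.

```shell
# Assembled run command using the example mount from this section.
# "xx.xx" is a placeholder tag; substitute a real release.
TAG="xx.xx"
CMD="nvidia-docker run -it --rm -v /home/jsmith/data/mnist:/data/mnist nvcr.io/nvidia/paddle:${TAG}"
echo "${CMD}"
# eval "${CMD}"   # uncomment to launch; requires the NVIDIA container runtime
```

The echo lets you inspect the exact command before launching it on a GPU-equipped host.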
For more information about PaddlePaddle, see the official PaddlePaddle documentation.