PyTorch LTSB 2
Description
PyTorch is a GPU accelerated tensor computational framework. Functionality can be extended with common Python libraries such as NumPy and SciPy. Automatic differentiation is done with a tape-based system at the functional and neural network layer levels.
Publisher: NVIDIA
Latest Tag: 23.08-lws2.1.0-py3
Modified: May 3, 2025
Compressed Size: 11.42 GB
Multinode Support: No
Multi-Arch Support: Yes
23.08-lws2.1.0-py3 (Latest) Security Scan Results

Security scan results are published for Linux / amd64 and Linux / arm64; see the Security Scanning tab for details.

What Is PyTorch?

PyTorch is a GPU-accelerated tensor computational framework that offers a high degree of flexibility and speed for deep learning. It integrates seamlessly with popular Python libraries such as NumPy, SciPy, and Cython, extending its functionality to meet the diverse needs of users. PyTorch also employs a tape-based system for automatic differentiation at both the functional and neural network layer levels, providing accelerated, NumPy-like functionality.
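
As a quick, illustrative sketch of the tape-based autodiff described above (the image tag is the one listed on this page; the Python one-liner is our own example, not part of the container documentation):

      # Illustrative only: autograd records y = sum(x^2) on its tape, then
      # replays it backward to produce dy/dx = 2x, printed as tensor([2., 2., 2.]).
      docker run --gpus all --rm nvcr.io/nvaie/pytorch:23.08-lws2.1.0-py3 \
          python -c "import torch; x = torch.ones(3, requires_grad=True); y = (x * x).sum(); y.backward(); print(x.grad)"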

Running PyTorch

Before you can run an NGC deep learning framework container, your Docker environment must support NVIDIA GPUs. To run a container, issue the appropriate command as explained in the Running A Container chapter in the NVIDIA Containers And Frameworks User Guide and specify the registry, repository, and tags. For more information about using NGC, refer to the NGC Container User Guide.
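
As a quick sanity check of GPU support (a sketch, assuming the NVIDIA Container Toolkit is installed on the host):

      # If Docker can expose NVIDIA GPUs, nvidia-smi inside the container
      # lists them; an error here means the host setup needs attention first.
      docker run --rm --gpus all nvcr.io/nvaie/pytorch:xx.xx-py3 nvidia-smi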

Procedure

  1. Select the Tags tab and locate the container image release that you want to run.
  2. In the Pull Tag column, click the icon to copy the docker pull command.
  3. Open a command prompt and paste the pull command. The container image pull begins; ensure it completes successfully before proceeding to the next step.
  4. Run the container image in either interactive or non-interactive mode:
  • Interactive mode:

          docker run --gpus all -it --rm -v local_dir:container_dir nvcr.io/nvaie/pytorch:xx.xx-py3
    
  • Non-interactive mode:

         docker run --gpus all --rm -v local_dir:container_dir nvcr.io/nvaie/pytorch:xx.xx-py3
    

Where:

  • -it means run in interactive mode

  • --rm will delete the container when finished

  • -v mounts a host directory or file into the container

  • local_dir is the directory or file from your host system (absolute path) that you want to access from inside your container. For example, the local_dir in the following path is /home/jsmith/data/mnist.

    -v /home/jsmith/data/mnist:/data/mnist

      If, from inside the container, you issue `ls /data/mnist`, you will see the same files as if you issued the `ls /home/jsmith/data/mnist` command from outside the container.
    
  • container_dir is the target directory when you are inside your container. For example, /data/mnist is the target directory in the example:

      -v /home/jsmith/data/mnist:/data/mnist
    
  • xx.xx is the container version. For example, 21.07.

  • command is the command you want to run in the image; it is appended after the image name (if omitted, the image's default command runs).

You might want to pull in data and model descriptions from locations outside the container for use by Torch. The easiest way to accomplish this is to mount one or more host directories as Docker data volumes.

At this point, you have pulled the latest container image and run it.
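
Putting these pieces together, here is a complete worked example using the tag and the example mount from this page (adjust both for your environment):

      # Pull the image listed under Latest Tag, then start an interactive
      # session with the host MNIST directory mounted at /data/mnist.
      docker pull nvcr.io/nvaie/pytorch:23.08-lws2.1.0-py3
      docker run --gpus all -it --rm \
          -v /home/jsmith/data/mnist:/data/mnist \
          nvcr.io/nvaie/pytorch:23.08-lws2.1.0-py3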

Note: PyTorch uses shared memory to share data between processes. For example, if you use Torch multiprocessing for multi-threaded data loaders, the default shared memory segment size that the container runs with may not be enough. Therefore, you should increase the shared memory size by adding either

     --ipc=host

or

     --shm-size=<requested size>

to the docker run --gpus all command line.
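
For example, the interactive command from above with the shared memory increase applied (a sketch; the tag and paths are the ones used earlier on this page):

     # --ipc=host shares the host IPC namespace with the container, so
     # multi-worker data loaders are not capped by the default /dev/shm size.
     docker run --gpus all -it --rm --ipc=host \
         -v /home/jsmith/data/mnist:/data/mnist \
         nvcr.io/nvaie/pytorch:23.08-lws2.1.0-py3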
  5. See /workspace/README.md inside the container for information on customizing your PyTorch image.

Suggested Reading

For the latest Release Notes, see the PyTorch Release Notes documentation website. For a full list of the supported software and specific versions packaged with this framework in this container image, see the Frameworks Support Matrix.

For more information about PyTorch, including tutorials, documentation, and examples, see:

  • PyTorch website
  • PyTorch project

Compatible Infrastructure Software Versions

For optimal performance, it is highly recommended that you deploy the supported NVIDIA AI Enterprise infrastructure software in conjunction with your AI software. This release is compatible with NVIDIA AI Enterprise Infrastructure 4.4.

Security Vulnerabilities in Open Source Packages

Please review the Security Scanning tab to view the latest security scan results. For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document. The VEX information can be reviewed and downloaded from the Security Scanning tab.

Known Issues:

Collection of ftrace events may not work correctly; a newer version of Nsight Systems, such as 2024.5.4 from JetPack 6.1 or JetPack 5.1, can be used instead to collect ftrace events. Profiling from the Nsight Systems GUI on IGX with a discrete GPU might not work, and neither might connecting to such a devkit from an Ubuntu x86_64 host over SSH. In these cases, use the Nsight Systems command line (nsys) directly on the target.
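
A hedged example of invoking the nsys command line directly on the target (train.py is a hypothetical workload; the trace selection is illustrative):

     # Profile a PyTorch script on the target itself, collecting CUDA and
     # OS-runtime traces; the resulting report can be opened in the GUI later.
     nsys profile --trace=cuda,osrt --output=report python train.py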

License

By pulling and using the container, you accept the terms and conditions of this End User License Agreement.