Supported platforms: Linux / amd64, Linux / arm64
NVIDIA Modulus is an open-source framework for building, training, and fine-tuning Physics-ML models.
With NVIDIA Modulus, we aim to provide researchers and industry specialists with tools that accelerate the development of such models for the scientific discipline of your choice. Whether you are exploring neural operators such as Fourier Neural Operators, physics-informed neural networks (PINNs), or a hybrid approach in between, Modulus provides an optimized stack that enables you to train your models at real-world scale.
This is the NVIDIA AI Enterprise (NVAIE) container for Modulus. For the open-source container, refer to: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/modulus/containers/modulus
Visit the NVIDIA Modulus Documentation for more information.
If you have Docker 19.03 or later, a typical command to launch the container with an interactive bash terminal is:
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --runtime nvidia --rm -it nvcr.io/nvidia/modulus/modulus-sfb:xx.xx bash
Where xx.xx is the container version. For example, 24.01.
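To work with your own scripts and datasets inside the container, you can also mount a local directory. For example, a minimal sketch (the mount path and the 24.01 tag are placeholders to adjust for your setup):
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --runtime nvidia -v ${PWD}:/workspace/host --rm -it nvcr.io/nvidia/modulus/modulus-sfb:24.01 bash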
Once inside the container, you can clone the Modulus repositories from GitHub and use the samples and examples provided to get started with Modulus. Refer to the Getting Started Guide for more details.
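For example, a minimal sketch of cloning the repository and running one of the bundled examples (the exact example path and script name are assumptions and may differ between releases):
git clone https://github.com/NVIDIA/modulus.git
cd modulus/examples/cfd/darcy_fno
python train_fno_darcy.py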
Jobs using the Modulus NGC Container on Base Command Platform clusters can be launched either by using the NGC CLI tool or by using the Base Command Platform Web UI. To use the NGC CLI tool, configure the Base Command Platform user, team, organization, and cluster information using the ngc config command as described here.
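For example, the following interactive command prompts for your NGC API key, org, team, and ACE:
ngc config set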
An example command to launch the container on a single-GPU instance is:
ngc batch run --name "My-1-GPU-Modulus-job" --instance dgxa100.80g.1.norm --commandline "sleep 30" --result /results --image "nvidia/modulus/modulus:24.01"
For details on running Modulus in a multi-GPU/multi-node configuration, refer to this Technical Blog and the Modulus Documentation.
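As an illustration, a single-node, 8-GPU job on Base Command Platform can be launched with torchrun (train.py is a placeholder for your own training script):
ngc batch run --name "My-8-GPU-Modulus-job" --instance dgxa100.80g.8.norm --commandline "torchrun --standalone --nnodes=1 --nproc_per_node=8 train.py" --result /results --image "nvidia/modulus/modulus:24.01"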
For more details on running on DGX Cloud, please refer to the NVIDIA Base Command Platform (BCP) User Guide.
Modulus can be used on public cloud instances like AWS, GCP, and Azure. To run Modulus on a cloud instance, provision an instance with a supported NVIDIA GPU, install the NVIDIA driver, Docker, and the NVIDIA Container Toolkit, log in to the NGC registry, and launch the container as described above.
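For example, on an Ubuntu-based cloud instance with the NVIDIA driver and Docker already installed (a sketch only; package setup varies by distribution and assumes the NVIDIA package repository is already configured, and the 24.01 tag is a placeholder):
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
docker login nvcr.io
docker pull nvcr.io/nvidia/modulus/modulus-sfb:24.01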
For key features, refer to the NVIDIA Modulus Release Notes.
See also: Modulus reference applications.
For optimal performance, deploy the supported NVIDIA AI Enterprise infrastructure software with this container.
Please review the Security Scanning tab to view the latest security scan results. For certain open-source vulnerabilities listed in the scan results, NVIDIA provides a response in the form of a Vulnerability Exploitability eXchange (VEX) document, which can also be reviewed and downloaded from the Security Scanning tab.
Get access to knowledge base articles and support cases or submit a ticket.
Visit the NVIDIA AI Enterprise Documentation Hub for release documentation, deployment guides and more.
This container is licensed under the NVIDIA AI Product Agreement. By pulling and using this container, you accept the terms and conditions of this license.
Go to the NVIDIA Licensing Portal to manage the software licenses for your products.