Architecture: Linux / amd64
NVIDIA Modulus is a toolkit for developing AI-enabled physics-ML applications.
With NVIDIA Modulus, we aim to provide researchers and industry specialists with tools that help accelerate the development of such models for their scientific discipline of interest.
Visit the NVIDIA Modulus website for more information.
If you have Docker 19.03 or later, a typical command to launch the container with an interactive bash terminal is:
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --runtime nvidia --rm -it nvidia/modulus/modulus:xx.xx bash
where xx.xx is the container version; for example, 23.05.
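For example, substituting the 23.05 tag used in the Base Command example below:
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --runtime nvidia --rm -it nvidia/modulus/modulus:23.05 bash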
Once inside the container, you can clone the Modulus repositories from GitHub and use the samples and examples provided to get started with Modulus.
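For example, assuming the open-source repositories are still hosted under the NVIDIA organization on GitHub, the repositories can be cloned from inside the container with:
git clone https://github.com/NVIDIA/modulus.git
git clone https://github.com/NVIDIA/modulus-sym.git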
Jobs using the Modulus NGC Container on Base Command Platform clusters can be launched either with the NGC CLI tool or through the Base Command Platform Web UI. To use the NGC CLI tool, configure the Base Command Platform user, team, organization, and cluster information using the ngc config command as described here.
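As a minimal sketch, the interactive configuration command is:
ngc config set
which typically prompts for your NGC API key, output format, org, team, and ACE.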
An example command to launch the container on a single-GPU instance is:
ngc batch run --name "My-1-GPU-Modulus-job" --instance dgxa100.80g.1.norm --commandline "sleep 30" --result /results --image "nvidia/modulus/modulus:23.05"
For details on running Modulus in a multi-GPU/multi-node configuration, refer to this Technical Blog and the Modulus Documentation.
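As a sketch, a single-node multi-GPU job can be launched by selecting a larger instance type, assuming your cluster exposes the 8-GPU dgxa100.80g.8.norm instance:
ngc batch run --name "My-8-GPU-Modulus-job" --instance dgxa100.80g.8.norm --commandline "sleep 30" --result /results --image "nvidia/modulus/modulus:23.05"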
Please visit the Modulus Forum for questions, discussions, and community support.
By pulling and using the container, you accept the terms and conditions of this SOFTWARE DEVELOPER KITS, SAMPLES AND TOOLS LICENSE AGREEMENT.