Organizations around the world are taking advantage of performance-optimized NVIDIA AI software from the NGC catalog to develop and deploy their AI applications on-premises and in the cloud.
As enterprises increase their reliance on hybrid and multi-cloud infrastructure, they need a consistent software stack across data center systems and cloud platforms to seamlessly run their AI applications.
By using NVIDIA Virtual Machine Images (VMIs), enterprises can build an application once and run exactly the same version on different clouds, making a multi-cloud strategy cost effective and quick to adopt. Specifically, NVIDIA VMIs unlock access to NVIDIA AI frameworks and software development kits (SDKs) through the NGC catalog, enabling developers to easily build, customize, and deploy GPU-accelerated AI solutions and data science workloads on any cloud platform.
The VMIs are tested with AI software from the NGC catalog to deliver optimized performance and are updated quarterly with the latest drivers, security patches, and support for the latest GPUs.
Let's see it in action.
NVIDIA VMIs provide an operating system environment for running NVIDIA GPU-accelerated software in the cloud. These VM images are built on top of the Ubuntu OS and are packaged with core dependencies, providing a GPU-optimized development environment for your GPU-accelerated application on a cloud service provider's infrastructure.
NVIDIA AI software includes GPU-optimized SDKs and libraries that accelerate your AI application on NVIDIA GPUs. These SDKs require the correct configuration of the NVIDIA driver, CUDA, and other dependencies such as Docker and the NVIDIA Container Toolkit. Developing and deploying your AI application on top of NVIDIA VMIs provides the following benefits:
Higher productivity: NVIDIA VMIs eliminate the need to manually install and configure the OS, NVIDIA GPU and network drivers, CUDA, and the Docker runtime, so you can get started right away on any GPU-powered instance on your favorite cloud.
Maximum portability: Using NVIDIA VMIs, you can develop models once and deploy them in any hybrid or multi-cloud environment.
Optimized software: VMIs are updated on a regular cadence with the latest software stack and validated for maximum performance. These free updates let you get more out of your GPU instance.
Enterprise support: Get enterprise support for NVIDIA AI software in the cloud.
Simplified workflows: NVIDIA also offers dedicated VMIs for Deep Learning & HPC applications, giving you an out-of-the-box experience with GPU-optimized software from the NGC catalog. Simply run the workloads without downloading additional software.
The VMIs are packaged with the following core dependencies:
Ubuntu Server OS
NVIDIA GPU driver (and network drivers, where applicable)
Docker runtime
NVIDIA Container Toolkit
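Once you have launched an instance from one of the marketplaces described below, a quick sanity check along the following lines (a minimal sketch; <tag> is a placeholder for a CUDA container tag from the NGC catalog) confirms that the pre-installed stack is working:
nvidia-smi                                                          # the GPU driver is loaded and the GPU is visible
docker --version                                                    # Docker is installed
docker run --rm --gpus all nvcr.io/nvidia/cuda:<tag> nvidia-smi     # the NVIDIA Container Toolkit exposes the GPU inside containers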
The following sections describe how to get started developing your application on the NVIDIA full stack in the cloud.
NVIDIA VMIs (called Amazon Machine Images, or AMIs, on AWS) are available on the AWS Marketplace: NVIDIA AMI on AWS
Using these AMIs, you can spin up a GPU-accelerated EC2 VM instance in minutes with a pre-installed Ubuntu OS, GPU driver, Docker, and the NVIDIA Container Toolkit. For a step-by-step guide on how to use the AMI on EC2, please refer to the AMI documentation.
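If you prefer the command line, a minimal sketch of launching such an instance with the AWS CLI looks like the following; the AMI ID, key pair, security group, and instance type are placeholders or assumptions you would replace with values from the Marketplace listing and your own account:
aws ec2 run-instances --image-id <nvidia-ami-id> --instance-type g4dn.xlarge --key-name <your-key-pair> --security-group-ids <sg-id>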
NVIDIA VMIs are available on the GCP Marketplace: NVIDIA VMI on GCP
Using these VMIs, you can spin up a GPU-accelerated GCP Compute Engine VM instance in minutes with a pre-installed Ubuntu OS, GPU driver, Docker, and the NVIDIA Container Toolkit.
For a step-by-step guide on how to use the VMI on a GCP Compute Engine instance, please refer to the VMI documentation.
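As a rough sketch with the gcloud CLI (the image name, image project, zone, machine type, and accelerator type below are placeholders or assumptions; take the actual values from the Marketplace listing and your project):
gcloud compute instances create my-nvidia-vmi --zone=us-central1-a --machine-type=n1-standard-8 --accelerator=type=nvidia-tesla-t4,count=1 --maintenance-policy=TERMINATE --boot-disk-size=128GB --image=<nvidia-vmi-image> --image-project=<nvidia-vmi-image-project>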
NVIDIA VMIs are available on the Azure Marketplace: NVIDIA VMI on Azure
Using these VMIs, you can spin up a GPU-accelerated Azure VM instance in minutes with a pre-installed Ubuntu OS, GPU driver, Docker, and the NVIDIA Container Toolkit.
For a step-by-step guide on how to use the VMI on an Azure VM instance, please refer to the VMI documentation.
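As a rough sketch with the Azure CLI (the image URN, resource group, and VM size are placeholders or assumptions; marketplace images may also require accepting the offer terms first):
az vm image terms accept --urn <publisher>:<offer>:<sku>:<version>
az vm create --resource-group <your-rg> --name my-nvidia-vmi --image <publisher>:<offer>:<sku>:<version> --size Standard_NC4as_T4_v3 --admin-username azureuser --generate-ssh-keys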
NVIDIA VMIs are available on the Oracle Cloud Marketplace: NVIDIA VMI on Oracle Cloud
Using these VMIs, you can spin up a GPU-accelerated Oracle Cloud compute VM instance in minutes with a pre-installed Ubuntu OS, GPU driver, Docker, and the NVIDIA Container Toolkit.
For a step-by-step guide on how to use the VMI on an Oracle Cloud compute instance, please refer to the VMI documentation.
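As a rough sketch with the OCI CLI (the availability domain, OCIDs, and GPU shape are placeholders or assumptions specific to your tenancy and the Marketplace listing):
oci compute instance launch --availability-domain <AD-name> --compartment-id <compartment-ocid> --shape VM.GPU.A10.1 --image-id <nvidia-vmi-image-ocid> --subnet-id <subnet-ocid> --ssh-authorized-keys-file ~/.ssh/id_rsa.pub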
NVIDIA VMIs are available on the Alibaba Cloud Marketplace: NVIDIA VMI
Using these VMIs, you can spin up a GPU-accelerated compute VM instance in an Alibaba Cloud international or China region in minutes with a pre-installed Ubuntu OS, GPU driver, Docker, and the NVIDIA Container Toolkit.
For a step-by-step guide on how to use the VMI on an Alibaba Cloud compute instance, please refer to the VMI documentation.
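As a rough sketch with the aliyun CLI (the region, image ID, instance type, security group, vSwitch, and key pair are placeholders or assumptions; take the actual values from the Marketplace listing and your account):
aliyun ecs RunInstances --RegionId <region-id> --ImageId <nvidia-vmi-image-id> --InstanceType ecs.gn6i-c4g1.xlarge --SecurityGroupId <sg-id> --VSwitchId <vsw-id> --KeyPairName <your-key-pair>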
This collection contains a sample container and Jupyter notebook for you to validate and get started with the NVIDIA GPU-Optimized VMI on your favorite cloud platform. Provision a GPU compute instance on your cloud provider as described above, selecting the NVIDIA GPU-Optimized VMI from the cloud marketplace. Once you SSH into the instance (see the example below), follow the steps to run the Jupyter notebook and/or the container.
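For example, you can SSH in while forwarding the JupyterLab port to your local machine (a sketch; the key file, user name, and public IP depend on your cloud provider and instance):
ssh -i <your-key.pem> -L 8888:localhost:8888 ubuntu@<instance-public-ip>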
Most of the containers hosted on NGC run seamlessly on NVIDIA VMIs in the cloud. The following are a few examples included with this collection:
Fashion-MNIST Example with the NVIDIA TensorFlow Container
# On the VM: launch the NGC TensorFlow container and publish the JupyterLab port (replace <tag> with a TensorFlow container tag from the NGC catalog)
docker run -it --gpus all -p 8888:8888 nvcr.io/nvidia/tensorflow:<tag>
# Inside the container: start JupyterLab, listening on all interfaces
jupyter lab --ip=0.0.0.0 --port=8888 --allow-root
There you have it! You can start training or fine-tuning your Fashion-MNIST model right away using the TensorFlow container on your favorite cloud instance. Similarly, you can run any of the containers from NGC seamlessly on cloud instances using NVIDIA VMIs.