NGC | Catalog

NVIDIA Virtual Machine Images (VMI)


Description

NVIDIA Virtual Machine Images (VMIs) provide a pre-configured, tested, and validated environment in the public cloud for running NGC Catalog assets, as well as any GPU-accelerated application, seamlessly.

Curator

NVIDIA

Modified

June 22, 2022

Overview

Running GPU-accelerated applications comes with its own challenges. You need an environment with the right configuration of the NVIDIA GPU driver, Docker, the NVIDIA Container Toolkit, the CLI, and other utility tools. Moreover, to accommodate hybrid and multi-cloud strategies, developers need to adapt applications to differences in the underlying software stack of each target cloud. This process can be resource- and time-intensive, so it is critical that developers have the tools to move seamlessly across cloud platforms for a quick time to market.

By using NVIDIA Virtual Machine Images (VMIs), enterprises can build an application once and run the exact same version on different clouds, making a multi-cloud strategy cost-effective and quick to adopt. Specifically, NVIDIA VMIs unlock access to NVIDIA AI frameworks and software development kits (SDKs) through the NGC Catalog, enabling developers to easily build, customize, and deploy GPU-accelerated AI solutions and data science workloads on any cloud platform.

Let's see it in action.


What are NVIDIA VMIs?

NVIDIA VMIs provide an operating system environment for running NVIDIA GPU-accelerated software in the cloud. These VM images are built on top of Ubuntu and are packaged with the core dependencies. VMIs provide a GPU-optimized development environment for your GPU-accelerated application on a cloud service provider’s infrastructure.


Why develop on NVIDIA platforms in the cloud?

NVIDIA AI software includes GPU-optimized SDKs and libraries that accelerate your AI application on NVIDIA GPUs. These SDKs require the correct configuration of the NVIDIA driver, CUDA, and other dependencies such as Docker and the NVIDIA Container Toolkit. Developing and deploying your AI application on top of NVIDIA VMIs provides the following benefits:

  • Higher productivity: NVIDIA VMIs eliminate the need to manually install and configure the OS, NVIDIA GPU and Network drivers, CUDA, and Docker runtime, so you can get started right away on any GPU-powered instance on your favorite cloud.

  • Maximum portability: Using NVIDIA VMIs, you can develop models once and deploy them on any hybrid or multi-cloud configuration.

  • Optimized software: VMIs are updated on a regular cadence with the latest software stack and validated for maximum performance. These free updates let you get more out of your GPU instance.

  • Enterprise support: Get enterprise support for NVIDIA AI in the cloud.

  • Simplified workflows: NVIDIA also offers dedicated VMIs for Deep Learning & HPC applications, giving you an out-of-the-box experience with GPU-optimized software from the NGC catalog. Simply run the workloads without downloading additional software.

What do the VMIs contain?

The VMIs are packaged with the following dependencies:

  • Ubuntu Server OS (base image)
  • NVIDIA GPU driver
  • Docker CE
  • NVIDIA Container Toolkit and runtime
  • Containers, CLI toolkits, and more
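Once an instance built from a VMI is running, the stack above can be sanity-checked from an SSH session. This is a minimal sketch; the exact driver version and the CUDA image tag shown are examples, not values guaranteed by any particular VMI release:

```shell
# Confirm the GPU driver is loaded and the GPU is visible
nvidia-smi

# Confirm Docker and the NVIDIA container runtime are installed
docker --version
docker info | grep -i nvidia

# Verify end-to-end GPU access from inside a container
# (the CUDA image tag is an example; pick one compatible with your driver)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If the last command prints the same GPU table as the host-side `nvidia-smi`, the driver, Docker, and the NVIDIA runtime are wired together correctly.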

Quick Start

The following section provides information on how to get started developing your application on NVIDIA full stack in the cloud.


Amazon Web Services (AWS)

NVIDIA VMIs (called AMIs, Amazon Machine Images, on AWS) are available on the AWS Marketplace: NVIDIA AMI on AWS

Using these AMIs, you can spin up a GPU-accelerated EC2 VM instance in minutes with a pre-installed Ubuntu OS, GPU driver, Docker, and the NVIDIA Container Toolkit. For a step-by-step guide on using the AMI on EC2, refer to the AMI documentation.
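As an alternative to the console, an instance can be launched from the AMI with the AWS CLI. This is a hedged sketch: the AMI ID, key pair name, and security group ID below are placeholders you would look up in the Marketplace listing and your own account, and the instance type is just one GPU-equipped example:

```shell
# Launch a GPU-accelerated EC2 instance from the NVIDIA AMI.
# ami-0123456789abcdef0, my-key, and sg-0123456789abcdef0 are placeholders.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type g4dn.xlarge \
    --key-name my-key \
    --security-group-ids sg-0123456789abcdef0 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=nvidia-vmi}]'

# Once the instance is running, connect over SSH (Ubuntu-based image):
# ssh -i my-key.pem ubuntu@<public-ip>
```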


Google Cloud Platform (GCP)

NVIDIA VMIs are available on GCP marketplace: NVIDIA VMI on GCP

Using these VMIs, you can spin up a GPU-accelerated Compute Engine VM instance in minutes with a pre-installed Ubuntu OS, GPU driver, Docker, and the NVIDIA Container Toolkit.


For a step-by-step guide on using the VMI on a GCP compute instance, refer to the VMI documentation.
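The same launch can be scripted with the gcloud CLI. This is a sketch under stated assumptions: the image family and image project are placeholders to be taken from the GCP Marketplace listing for the NVIDIA VMI, and the zone, machine type, and GPU type are examples:

```shell
# Create a GPU-accelerated Compute Engine instance from the VMI.
# <vmi-image-family> and <vmi-image-project> are placeholders; take the
# real values from the Marketplace listing. GPU instances require a
# TERMINATE maintenance policy.
gcloud compute instances create nvidia-vmi-demo \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=<vmi-image-family> \
    --image-project=<vmi-image-project> \
    --boot-disk-size=100GB
```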

Microsoft Azure

NVIDIA VMIs are available on Azure marketplace: NVIDIA VMI on Azure

Using these VMIs, you can spin up a GPU-accelerated Azure compute VM instance in minutes with a pre-installed Ubuntu OS, GPU driver, Docker, and the NVIDIA Container Toolkit.
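On Azure, the equivalent launch can be done with the az CLI. This is a hedged sketch: the image URN placeholder must be replaced with the real publisher:offer:sku:version for the NVIDIA Marketplace offer, and the resource group, region, and VM size are examples:

```shell
# Create a resource group (example name and region)
az group create --name nvidia-vmi-rg --location eastus

# Create a GPU-accelerated VM from the Marketplace image.
# <publisher>:<offer>:<sku>:<version> is a placeholder URN; list candidates
# with: az vm image list --publisher nvidia --all
az vm create \
    --resource-group nvidia-vmi-rg \
    --name nvidia-vmi-demo \
    --image <publisher>:<offer>:<sku>:<version> \
    --size Standard_NC6s_v3 \
    --admin-username azureuser \
    --generate-ssh-keys
```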


Oracle Cloud

NVIDIA VMIs are available on Oracle Cloud marketplace: NVIDIA VMI on Oracle Cloud

Using these VMIs, you can spin up a GPU-accelerated Oracle Cloud compute VM instance in minutes with a pre-installed Ubuntu OS, GPU driver, Docker, and the NVIDIA Container Toolkit.


Alibaba Cloud

NVIDIA VMIs are available on Alibaba Cloud marketplace: NVIDIA VMI

Using these VMIs, you can spin up a GPU-accelerated compute VM instance in Alibaba Cloud international or China regions in minutes with a pre-installed Ubuntu OS, GPU driver, Docker, and the NVIDIA Container Toolkit.

How to use the entities in this collection

This collection contains a sample container and a Jupyter notebook for you to validate and get started with the NVIDIA GPU-Optimized VMI on your favorite cloud platform. Provision a GPU compute instance with your cloud provider as described above, selecting the NVIDIA GPU-Optimized VMI from the cloud marketplace. Once you SSH into the instance, follow the steps below to run the Jupyter notebook and/or the container.

Usage

Most of the containers hosted on NGC run seamlessly on NVIDIA VMIs in the cloud. The following is an example included with this collection:

Fashion-MNIST Example with the NVIDIA TensorFlow Container

  1. Log in to your cloud instance running the NVIDIA VMI
  2. Navigate to the ‘Entities’ tab of this collection
  3. Select the TensorFlow container and copy the pull command for the desired tag from the top-right corner
  4. Paste the docker pull command to pull the container onto your instance
  5. Run the TensorFlow Docker container with the following command (insert the appropriate tag; with --network=host the container shares the host’s network stack, so no separate -p port mapping is needed):
    docker run -it --gpus all --network=host nvcr.io/nvidia/tensorflow:<tag>

  6. Copy the Fashion-MNIST example Jupyter notebook into the container (click on the notebook, go to the file browser, click on “…”, then copy and paste the wget command into the container)
  7. Run the Jupyter notebook server:
    jupyter lab --ip=0.0.0.0 --port=8888 --allow-root

  8. Open the notebook in your favorite browser
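Put together, steps 4–7 look like the following shell session on the instance. The container tag is a placeholder; check the TensorFlow container page on NGC for the current tags:

```shell
# Pull the TensorFlow container from NGC (replace <tag> with a real tag
# from the container's NGC page)
docker pull nvcr.io/nvidia/tensorflow:<tag>

# Start the container with GPU access on the host network; with
# --network=host the container shares the host's ports, so Jupyter on
# 8888 is reachable without a -p mapping
docker run -it --gpus all --network=host nvcr.io/nvidia/tensorflow:<tag>

# Inside the container: fetch the example notebook using the wget command
# from the notebook's NGC page, then start the Jupyter server
jupyter lab --ip=0.0.0.0 --port=8888 --allow-root
```

You can then open http://&lt;instance-ip&gt;:8888 in a browser (the instance’s firewall or security group must allow inbound traffic on port 8888, or you can tunnel the port over SSH).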

There you have it! You can start training or fine-tuning your Fashion-MNIST model right away using the TensorFlow container on your favorite cloud instance. Similarly, you can run any of the containers from NGC seamlessly on cloud instances using NVIDIA VMIs.
