MONAI Toolkit

MONAI Toolkit is a one-stop, development sandbox environment for researchers, data scientists, developers, and clinical teams.
Latest Tag: 1.1-2 (May 1, 2024)
Compressed Size: 9.95 GB
Architecture: Linux / amd64

What is the MONAI Toolkit?

NVIDIA co-founded Project MONAI, the Medical Open Network for AI, with the world’s leading academic medical centers to establish an inclusive community of AI researchers who develop and exchange best practices for AI in healthcare imaging across academia and enterprise.

MONAI is the domain-specific, open-source Medical AI framework that drives research breakthroughs and accelerates AI into clinical impact. MONAI unlocks the power of medical data to build deep learning models for medical AI workflows. MONAI provides the essential domain-specific tools from data labeling to model training, making it easy to develop, reproduce and standardize medical AI lifecycles.

MONAI Enterprise is NVIDIA’s offering for enterprise-grade use of MONAI with an NVIDIA AI Enterprise (NVAIE) license. MONAI Enterprise on NVAIE offers the MONAI Toolkit container, which provides enterprise developers and researchers with a secure, scalable workflow to develop medical imaging AI.

MONAI Toolkit, the first offering of MONAI Enterprise, includes:

  • MONAI Label: An intelligent labeling and learning tool with active learning that reduces data labeling costs by 75%
  • MONAI Core: A training framework to build robust AI models with self-supervised learning, federated learning, and Auto3DSeg.
    • With federated learning APIs and algorithms built with NVIDIA FLARE, MONAI can run on any federated learning platform.
    • Auto3DSeg is domain-specialized AutoML for 3D segmentation, accelerating the development of medical imaging models and maximizing researcher productivity and throughput. Developers can get started with 1-5 lines of code, reducing training time from weeks/months to 2 days.
  • MONAI Model Zoo: A curated library of 15 pre-trained models (CT, MR, Pathology, Endoscopy) that allows data scientists and clinical researchers to jumpstart AI development.
  • Curated Jupyter notebooks and tutorial resources to ease the onboarding process.

System Requirements

Installation Prerequisites

Using the MONAI Toolkit container requires the host system to have Docker and the NVIDIA Container Toolkit installed, along with a supported NVIDIA GPU driver.

The MONAI Toolkit base container image is built on release 23.03 of the NVIDIA framework containers.

For a full list of supported software and specific versions that come packaged with the frameworks based on the container image, see the Framework Containers Support Matrix and the NVIDIA Container Toolkit Documentation.

No other installation, compilation, or dependency management is required. It is not necessary to install the NVIDIA CUDA Toolkit.

Hardware Requirements

MONAI Toolkit is supported on GPUs such as H100, A100, A40, A30, A10, L40, and more.

For a complete list of supported hardware, please see the NVIDIA AI Enterprise Product Support Matrix.


For more detailed usage information, see the MONAI Toolkit docs.

Running the MONAI Toolkit JupyterLab Instance

To start on a local host system, the MONAI Toolkit JupyterLab instance can be launched, printing a ready-to-open link, with:

docker run --gpus all -it --rm --ipc=host --net=host <monai-toolkit-image>

where <monai-toolkit-image> is a placeholder for the image reference (registry, repository, and tag) you pulled from NGC.

After the JupyterLab app is started, follow the onscreen instructions and open the URL in a web browser.

Note: By default, the container uses all of the host system's GPU resources, networking, and inter-process communication (IPC) namespace. Several notebooks require a large shared memory size for the container to run comprehensive workflows. For more information, please refer to Changing Shared Memory Segment Size below.

Running the MONAI Toolkit in Interactive Bash

To run the MONAI Toolkit container with the bash shell, issue the command below to start the prebuilt container:

docker run --gpus all -it --rm --ipc=host --net=host <monai-toolkit-image> bash

Changing Shared Memory Segment Size

MONAI may use shared memory to share data between processes. For example, if you use Torch multiprocessing for multi-threaded data loaders, the default shared memory segment size that the container runs with may not be enough. Therefore, you should increase the shared memory size by passing either --ipc=host or --shm-size=<requested memory size> to the docker run command.

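As a quick sanity check, the shared-memory segment visible to a process can be inspected with standard Linux tooling; inside a default Docker container it is typically only 64 MB, while --ipc=host exposes the host's own /dev/shm. This is a generic Linux check, not a MONAI-specific command:

```shell
# Inspect the shared-memory mount available to the current environment.
# Inside a default Docker container this is typically 64M; with --ipc=host
# or --shm-size=8g it will be much larger.
df -h /dev/shm
```

If the reported size is small, restart the container with --shm-size=<requested memory size> or --ipc=host before running the multi-worker data-loading notebooks.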

Access the JupyterLab Remotely

Users can access the JupyterLab instance by simply typing the host machine's URL or IP address in the web browser with the port number (default: 8888). A token may be required to log in for the first time. Users can find the token on the system that hosts the MONAI Toolkit container by looking for the code after /?token= on the screen.
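As an illustrative sketch, the token can be cut out of the printed URL with plain shell string manipulation. The URL below is a fabricated example, not a real token:

```shell
# Example startup line as printed by JupyterLab (token value is fabricated).
LOG_LINE='http://hostname:8888/?token=abc123def456'

# Strip everything up to and including "token=" to isolate the token.
TOKEN="${LOG_LINE##*token=}"
echo "${TOKEN}"   # prints abc123def456
```

Paste the extracted value into the token field of the JupyterLab login page.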

Running the JupyterLab Instance on a Remote Host Machine

JupyterLab is started on port 8888 by default. To assign another port, start the JupyterLab instance with the JUPYTER_PORT environment variable set.
For example:

docker run  --gpus all -it --rm --ipc=host --net=host \
            -e JUPYTER_PORT=8900 \
            <monai-toolkit-image>
Note: More details about running docker commands, including how to specify the registry, repository, and tags, are explained in the Running A Container chapter of the NVIDIA Containers For Deep Learning Frameworks User’s Guide.

Mount Custom Data Directory

To mount a custom data directory, you can use -v to mount the drive(s) and override the default data directory environment variable MONAI_DATA_DIRECTORY used by many notebooks. For example:

docker run  --gpus all -it --rm --ipc=host --net=host \
            -v ~/workspace:/workspace \
            -e MONAI_DATA_DIRECTORY=/workspace/data \
            <monai-toolkit-image>
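Putting the pieces together, a full launch command can be composed in a variable and echoed for review before it is executed. The image reference, tag, and paths below are placeholders, not exact values for your system:

```shell
# Placeholders: substitute the tag you pulled from NGC and your own data path.
IMAGE="nvcr.io/nvidia/clara/monai-toolkit:<tag>"
DATA_DIR="${HOME}/workspace"

# Compose the command with the GPU, IPC, port, and volume options discussed above,
# then echo it so it can be inspected before running.
CMD="docker run --gpus all -it --rm --ipc=host --net=host \
  -e JUPYTER_PORT=8900 \
  -v ${DATA_DIR}:/workspace \
  -e MONAI_DATA_DIRECTORY=/workspace/data \
  ${IMAGE}"
echo "${CMD}"
```

Echoing first is a small safeguard: it lets you confirm the expanded paths and port before handing the command to the shell.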

Configure JupyterLab

The entry point in /opt/docker/ configures JupyterLab with the default settings. However, it cannot cover every Jupyter use scenario and API. You can instead run the jupyter lab command directly in the MONAI Toolkit container and configure the instance yourself:

docker run --gpus all -it --rm --ipc=host --net=host <monai-toolkit-image> jupyter lab

Once in the JupyterLab environment, a welcome file launches by default. You can use this file to help navigate the chapters within the Toolkit.

We recommend going sequentially through each chapter and focusing on the specific types of tasks that you're working on (e.g., Radiology, Pathology, or Computer-Assisted Intervention). If you're looking for a particular workflow, find the one below that best suits where you are in your journey.

If you are at the start of your journey and looking to understand how to speed up your image annotation process, check out Chapter 2: Using MONAI Label.

If you already have an annotated dataset and want to get started training with MONAI or integrate MONAI into your existing PyTorch training loop, take a look at Chapter 3: Using MONAI Core.

If you are interested in MONAI and Federated Learning working together, look at Chapter 4: MONAI Federated Learning.

If you are interested in some advanced topics and looking to compare benchmarks, interested in performance profiling, or looking for some researcher best practices with MONAI, check out Chapter 5: Performance and Benchmarking.

Additional Resources

You can find links to documentation for the MONAI Toolkit or specific frameworks below.

Join the MONAI Community


Start contributing to MONAI by submitting a bug, requesting a feature, or starting a discussion today!


Are you having trouble? Trying to find out where to contribute? Want to share your cool project? Join our MONAI Slack and chat with the community.


By pulling and using the MONAI Toolkit container and models, you accept the terms and conditions of the NVIDIA AI Product Agreement. Register for a free evaluation license to try NVIDIA AI Enterprise on your compatible, on-premises system or on the cloud!