NVIDIA co-founded Project MONAI, the Medical Open Network for AI, with the world’s leading academic medical centers to establish an inclusive community of AI researchers who develop and exchange best practices for AI in healthcare imaging across academia and enterprise.
MONAI is a domain-specific, open-source medical AI framework that drives research breakthroughs and accelerates the translation of AI into clinical impact. MONAI unlocks the power of medical data for building deep learning models for medical AI workflows, and provides the essential domain-specific tools, from data labeling to model training, that make it easy to develop, reproduce, and standardize medical AI lifecycles.
MONAI Enterprise is NVIDIA’s offering for enterprise-grade use of MONAI under an NVIDIA AI Enterprise (NVAIE) license. MONAI Enterprise on NVAIE offers the MONAI Toolkit container, which provides enterprise developers and researchers with a secure, scalable workflow for developing medical imaging AI.
MONAI Toolkit is the first offering of MONAI Enterprise.
Using the MONAI Toolkit Container requires the host system to have the following installed:
Version 2.x of the MONAI Toolkit container image uses the 24.03 release of the NVIDIA PyTorch container as its base.
For a full list of supported software and specific versions that come packaged with the frameworks based on the container image, see the Framework Containers Support Matrix and the NVIDIA Container Toolkit Documentation.
No other installation, compilation, or dependency management is required. It is not necessary to install the NVIDIA CUDA Toolkit.
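A quick way to confirm the host is ready (Docker, the NVIDIA driver, and the NVIDIA Container Toolkit all installed) is to run nvidia-smi inside any CUDA-enabled container; the base image tag below is only an illustration, and any CUDA base image available to you works the same way:

```shell
# Verify GPU passthrough from the host into a container.
# The image tag is illustrative; substitute any CUDA base image you have.
docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi
```

If the GPU table prints without errors, the host can run the MONAI Toolkit container.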
MONAI Toolkit is supported on GPUs such as H100, A100, A40, A30, A10, L40, and more.
For a complete list of supported hardware, see the NVIDIA AI Enterprise Product Support Matrix.
For more detailed usage information, see the MONAI Toolkit docs.
On a local host system, the MONAI Toolkit JupyterLab instance, which prints a ready-to-open URL, can be started with:
docker run --gpus all -it --rm --ipc=host --net=host nvcr.io/nvidia/clara/monai-toolkit
After the JupyterLab app starts, follow the onscreen instructions and open the URL in a web browser.
Note: By default, the container uses all of the host system's GPU resources and shares the host's networking and inter-process communication (IPC) namespaces. Some notebooks require a large shared memory size for the container to run comprehensive workflows. For more information, refer to the section on changing the shared memory segment size.
To run the MONAI Toolkit container with the bash shell, issue the command below to start the prebuilt container:
docker run --gpus all -it --rm --ipc=host --net=host \
nvcr.io/nvidia/clara/monai-toolkit \
/bin/bash
MONAI may use shared memory to share data between processes. For example, if you use PyTorch multiprocessing for multi-threaded data loaders, the default shared memory segment size that the container runs with may not be sufficient. In that case, increase the shared memory size by passing either:
--ipc=host
or
--shm-size=
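For example, if you prefer not to share the host's IPC namespace, the shared memory segment can be enlarged explicitly instead; the 16g value below is an arbitrary illustration, so size it to your workload:

```shell
# Replace --ipc=host with an explicit shared memory size (value is illustrative).
docker run --gpus all -it --rm --shm-size=16g --net=host \
    nvcr.io/nvidia/clara/monai-toolkit
```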
Users can access the JupyterLab instance by entering the host machine's hostname or IP address, with the port number (default: 8888), in a web browser.
A token may be required to log in for the first time.
Users can find the token on the system hosting the MONAI Toolkit container by looking for the string after /?token=
in the startup output.
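One way to recover the token after startup is to run the container detached under a known name and search its logs; the container name monai-toolkit below is an assumption chosen for this sketch, not a required name:

```shell
# Start detached with an explicit name so the logs are easy to find later.
docker run -d --name monai-toolkit --gpus all --ipc=host --net=host \
    nvcr.io/nvidia/clara/monai-toolkit

# Print the login URL(s) containing the token from the startup output.
docker logs monai-toolkit 2>&1 | grep 'token='
```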
JupyterLab starts on port 8888 by default. To assign another port, set the JUPYTER_PORT
environment variable when starting the instance:
-e JUPYTER_PORT=8900
For example:
docker run --gpus all -it --rm --ipc=host --net=host \
-e JUPYTER_PORT=8900 \
nvcr.io/nvidia/clara/monai-toolkit
Note: Running docker commands, including how to specify the registry, repository, and tags, is explained in the Running A Container chapter of the NVIDIA Containers For Deep Learning Frameworks User’s Guide.
To mount a custom data directory, users can use -v
to mount the drive(s) and override the default data directory environment variable MONAI_DATA_DIRECTORY
used by many notebooks.
For example:
docker run --gpus all -it --rm --ipc=host --net=host \
-v ~/workspace:/workspace \
-e MONAI_DATA_DIRECTORY=/workspace/data \
nvcr.io/nvidia/clara/monai-toolkit
/opt/docker/runtoolkit.sh
provides an entry point that configures JupyterLab with the default settings.
However, it cannot cover every Jupyter use case and API.
Instead, the user can run the jupyter lab command directly in the MONAI Toolkit container and configure the instance:
docker run --gpus all -it --rm --ipc=host --net=host nvcr.io/nvidia/clara/monai-toolkit jupyter lab
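For instance, standard jupyter lab flags can be appended to the command to control the port and root directory; the port value and the /workspace path below are illustrative assumptions, not defaults of the Toolkit:

```shell
# Pass jupyter lab options directly (port and root directory are illustrative).
docker run --gpus all -it --rm --ipc=host --net=host \
    -v ~/workspace:/workspace \
    nvcr.io/nvidia/clara/monai-toolkit \
    jupyter lab --port=8900 --ServerApp.root_dir=/workspace
```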
Once in the JupyterLab environment, the welcome.md
file opens by default. You can use this file to navigate the chapters within the Toolkit.
We recommend going sequentially through each chapter and focusing on the specific types of tasks that you're working on (e.g., Radiology, Pathology, or Computer-Assisted Intervention). If you're looking for a particular workflow, find the one below that best suits where you are in your journey.
If you are at the start of your journey and looking to understand how to speed up your image annotation process, check out Chapter 2: Using MONAI Label.
If you already have an annotated dataset and want to get started training with MONAI or integrate MONAI into your existing PyTorch training loop, take a look at Chapter 3: Using MONAI Core.
If you are interested in MONAI and Federated Learning working together, look at Chapter 4: MONAI Federated Learning.
If you are interested in some advanced topics and looking to compare benchmarks, interested in performance profiling, or looking for some researcher best practices with MONAI, check out Chapter 5: Performance and Benchmarking.
You can find links to documentation for the MONAI Toolkit or specific frameworks below.
Start contributing to MONAI by submitting a bug, requesting a feature, or starting a discussion today!
Are you having trouble? Trying to find out where to contribute? Want to share your cool project? Join our MONAI Slack and chat with the community.
By pulling and using the MONAI Toolkit container and models, you accept the terms and conditions of the NVIDIA AI Product Agreement. Register for a free evaluation license to try NVIDIA AI Enterprise on your compatible, on-premises system or on the cloud!