Supported platforms: Linux / arm64, Linux / amd64
Deep Graph Library (DGL) is a Python package for building and training graph neural networks on top of existing deep learning frameworks. NGC Containers are the easiest way to get started with DGL. The DGL NGC Container is built with the latest versions of DGL, PyTorch, and their dependencies.
The DGL NGC Container is optimized for GPU acceleration and contains a validated set of libraries that enable and optimize GPU performance. It also includes software for accelerating data sampling and ETL (cuGraph, NVIDIA RAPIDS), training (cuDNN, NCCL), and inference (TensorRT) workloads.
There are two main prerequisites for running DGL containers: Docker with the NVIDIA Container Toolkit installed on the host, and an NVIDIA GPU with a driver that supports the chosen container release.
Use the following command to run the container, where <xx.xx> is the container version, for example, 23.07 for the July 2023 release:
docker run --gpus all -it --rm nvcr.io/nvidia/dgl:<xx.xx>-py3
To start JupyterLab from the container and view all the included examples:
docker run --gpus all -it --rm -p 8888:8888 nvcr.io/nvidia/dgl:<xx.xx>-py3 bash -c 'source /usr/local/nvm/nvm.sh && jupyter lab'
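With the -p 8888:8888 port mapping above, JupyterLab can then be opened in a browser on the host at http://localhost:8888; the access token is printed in the container log.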
You might want to pull in your own data or persist code outside the DGL container. The easiest method is to mount one or more host directories as Docker bind mounts so your code changes persist.
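For example, the following command makes a host directory available inside the container (the host path /path/to/your/workspace and the container path /workspace/myproject are placeholders, substitute your own):
docker run --gpus all -it --rm -v /path/to/your/workspace:/workspace/myproject nvcr.io/nvidia/dgl:<xx.xx>-py3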
We also have a GraphSAGE training example:
cd examples/graphsage
python3 train_full.py --dataset cora --gpu 0
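On the first run the script downloads the Cora citation dataset automatically if it is not already cached, and --gpu 0 selects the first GPU for training.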
If you are looking for examples from DGL itself, you can find them in /opt/dgl/dgl-source/.
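For example, to list them from a shell inside the running container:
ls /opt/dgl/dgl-source/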
Documentation and release notes for the DGL container are available in the NVIDIA Deep Learning Frameworks documentation.
NVIDIA's platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model's developer to ensure that it meets the requirements for the relevant industry and use case, that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results, and that the model is being used under the conditions and in the manner intended.