Linux / amd64
This container provides a demonstration of GPU-accelerated data science workflows using RAPIDS.
The RAPIDS suite of open source software libraries and APIs lets you execute end-to-end data science and analytics pipelines entirely on GPUs. Licensed under Apache 2.0, RAPIDS is incubated by NVIDIA® based on extensive hardware and data science experience. RAPIDS utilizes NVIDIA CUDA® primitives for low-level compute optimization and exposes GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
RAPIDS also focuses on common data preparation tasks for analytics and data science. This includes a familiar dataframe API that integrates with a variety of machine learning algorithms for end-to-end pipeline acceleration without paying typical serialization costs. RAPIDS also supports multi-node, multi-GPU deployments, enabling vastly accelerated processing and training on much larger datasets.
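As a minimal sketch of what this looks like in practice (assuming the RAPIDS libraries cuDF and cuML are available, as in a standard RAPIDS environment; the column names and values below are purely illustrative), a dataframe can be built in GPU memory and handed directly to a GPU-accelerated estimator:

# Minimal sketch: a GPU dataframe feeding a GPU-accelerated model.
# Assumes cuDF and cuML are installed; the data here is made up for illustration.
import cudf
from cuml.linear_model import LinearRegression

# Build a small dataframe directly in GPU memory
df = cudf.DataFrame({
    "x1": [1.0, 2.0, 3.0, 4.0],
    "x2": [2.0, 1.0, 0.0, -1.0],
    "y":  [3.0, 5.0, 7.0, 9.0],
})

# Fit and predict on the GPU; the data stays in device memory,
# so no serialization is paid between the dataframe and the model
model = LinearRegression()
model.fit(df[["x1", "x2"]], df["y"])
print(model.predict(df[["x1", "x2"]]))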
Please review the following resources:
Getting started with this container is straightforward using nvidia-docker.
This image contains the complete RAPIDS Jupyter Lab environment and tutorial.
1. Download the container from NGC
docker pull nvcr.io/nvidia/rapids_ml_workshop:20.08
2. Run the notebook server
docker run --gpus all --rm -it -p 8888:8888 nvcr.io/nvidia/rapids_ml_workshop:20.08
Note: Depending on your Docker version, you may need to use 'docker run --runtime=nvidia' instead of '--gpus all', or remove '--gpus all' altogether.
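For example, on an older Docker installation configured with the NVIDIA runtime, the equivalent command may look like:
docker run --runtime=nvidia --rm -it -p 8888:8888 nvcr.io/nvidia/rapids_ml_workshop:20.08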
3. Connect to notebook server
Jupyter Lab will be available on port 8888!
e.g. http://127.0.0.1:8888 if running on a local machine
(or the first available port after that, such as 8889 or 8890, if 8888 is occupied; see the command output)
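If port 8888 is already taken on the Docker host itself, you can instead map a different host port to the container's 8888, for example:
docker run --gpus all --rm -it -p 8889:8888 nvcr.io/nvidia/rapids_ml_workshop:20.08
and then connect to http://127.0.0.1:8889 instead.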
4. Run the notebooks