NGC | Catalog

cuOpt


Description

NVIDIA cuOpt™ combinatorial optimization software

Publisher

NVIDIA

Latest Tag

22.08

Modified

September 20, 2022

Compressed Size

2.97 GB

Multinode Support

No

Multi-Arch Support

No

22.08 (Latest) Scan Results

Linux / amd64



NVIDIA cuOpt™ is provided as both a Python SDK and a RESTful microservice. The provided container can be run as the Python SDK, the RESTful service, or both.

  • Python SDK: The cuOpt Python SDK provides a flexible development environment for using the cuOpt API directly to build your own services and applications.
  • RESTful Microservice: The cuOpt microservice leverages OpenAPI standards, serving endpoints on port 5000 (by default) that accept optimization input data and return optimized routing solutions. This service handles asynchronous data collection, input/output data preprocessing, and state management.
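As a hedged sketch of talking to the microservice with curl: the endpoint path below is a placeholder assumption, not the documented cuOpt API; take the real routes from the running service's OpenAPI documentation.

```shell
# Hypothetical sketch: the endpoint path is an assumption; consult the
# running service's OpenAPI documentation for the real routes.
CUOPT_HOST="127.0.0.1"   # default host when running locally with --network=host
CUOPT_PORT="5000"        # default service port

# Printed as a dry run; remove the leading echo to send the request.
echo curl -s "http://${CUOPT_HOST}:${CUOPT_PORT}/<endpoint>" \
     -H "Content-Type: application/json" \
     -d @optimization_data.json
```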

Running NVIDIA cuOpt

Setup

Before you can run the NVIDIA cuOpt container, your Docker environment must support NVIDIA GPUs. To enable GPU support, refer to the Using Native GPU Support section in the NVIDIA Containers And Frameworks User Guide, then follow the instructions below. For more information about using NGC, refer to the NGC Container User Guide.

  • If you have not generated an API key, go to the Setup option in your profile and choose Get API Key. Store the key securely; if you lose it, you can generate a new one. More information can be found here.

  • If you haven't logged in through Docker, or you have changed your API key, provide your NGC credentials with docker login using the following command:

    sudo docker login nvcr.io
    Username: $oauthtoken
    Password: <my-api-key>
    

    Note: username is $oauthtoken and password is your API Key.

Procedure

  1. In the Select a tag dropdown, locate the container image release that you want to run.

  2. Click the Copy Image Path button to copy the container image path.

  3. Open a command prompt and use the image path to pull the Docker image. Ensure the pull completes successfully before proceeding to the next step.

    docker pull <IMAGE-PATH>
    
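As a concrete but illustrative example, assuming the 22.08 tag (verify the actual path with the Copy Image Path button on the NGC page):

```shell
# Illustrative image path; the authoritative value comes from the
# Copy Image Path button on the NGC page.
IMAGE_PATH="nvcr.io/nvidia/cuopt/cuopt:22.08"

# Printed as a dry run; remove the leading echo to pull the image.
echo docker pull "$IMAGE_PATH"
```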
  4. Run the container image.

  • To Run cuOpt as a RESTful Service

    • If you have Docker 19.03 or later, a typical command to launch the container is:

      docker run -it --gpus all --rm --network=host <ImageID>
      
    • If you have Docker 19.02 or earlier, a typical command to launch the container is:

      nvidia-docker run -it --rm --network=host <ImageID>
      
    • (Optional) If running example notebooks found on GitHub, ensure the microservice notebooks reference the relevant IP address and port. By default in a local environment this is 127.0.0.1 on port 5000.

    • (Optional) Once the container is running, local static and interactive documentation can be found here.

  • To run cuOpt as a Python SDK

    • If you have Docker 19.03 or later, a typical command to launch the container is:

      docker run -it --gpus all --rm --network=host <ImageID> /bin/bash
      
    • If you have Docker 19.02 or earlier, a typical command to launch the container is:

      nvidia-docker run -it --rm --network=host <ImageID> /bin/bash
      
    • (Optional) Sample notebooks are available in the container itself and can be accessed as follows:

      docker run --gpus all -it --rm --network=host <ImageID> jupyter-notebook --notebook-dir /home/cuopt_user/notebooks
      
    • (Optional) If running example notebooks found on GitHub, after cloning the repo, ensure that the Python notebooks are mounted to the running container.

      docker run --gpus all -it --rm --network=host -v <FullPathToNotebooks>:/notebooks --user 1000:1000 <ImageID> jupyter-notebook --notebook-dir /notebooks
      

Where:

  • -it runs the container in interactive mode
  • --rm deletes the container when it exits
  • --network sets the networking mode for the container
  • -v mounts a local directory into the container
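Putting the flags above together, a full SDK-mode launch with notebooks mounted might look like the following sketch; the image path and notebook directory are illustrative assumptions, not fixed values.

```shell
# Illustrative values; substitute your own image path and notebook directory.
IMAGE_PATH="nvcr.io/nvidia/cuopt/cuopt:22.08"
NOTEBOOK_DIR="$HOME/cuopt-notebooks"

# Printed as a dry run; remove the leading echo to launch the container.
echo docker run -it --gpus all --rm --network=host \
     -v "${NOTEBOOK_DIR}:/notebooks" --user 1000:1000 \
     "$IMAGE_PATH" jupyter-notebook --notebook-dir /notebooks
```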

Additional setup instructions can be found here.

Documentation

Examples and Resources

By pulling and using the container, you accept the terms and conditions of this End User License Agreement.