
cuOpt


Description: NVIDIA cuOpt™ combinatorial optimization software
Publisher: NVIDIA
Latest Tag: 22.12
Modified: April 5, 2023
Compressed Size: 6.64 GB
Multinode Support: No
Multi-Arch Support: No
Security Scan Results (22.12, Latest): Linux / amd64



NVIDIA cuOpt™ is provided as both a Python SDK and a microservice. The container can run the Python SDK, the microservice, or both.

  • cuOpt Python SDK : The cuOpt Python SDK provides a flexible development environment for using the cuOpt API directly to build your own services and applications.
  • cuOpt Microservice : The cuOpt microservice serves OpenAPI-standard endpoints on port 5000 (by default) that accept optimization input data and return optimized routing solutions. The service handles asynchronous data collection, input/output data preprocessing, and state management.
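
As a sketch of the kind of input the microservice accepts, the snippet below assembles a minimal routing request body in Python. The field names (cost_matrix_data, task_data, fleet_data) are illustrative assumptions rather than the authoritative schema; the exact format is defined by the OpenAPI documentation the running container serves.

```python
# Hypothetical sketch of a request body for the cuOpt microservice.
# Field names below are assumptions for illustration; consult the
# service's OpenAPI documentation for the authoritative schema.
def build_routing_payload(cost_matrix, task_locations, vehicle_locations):
    """Assemble a minimal routing-optimization request body."""
    return {
        "cost_matrix_data": {"data": {"0": cost_matrix}},
        "task_data": {"task_locations": task_locations},
        "fleet_data": {"vehicle_locations": vehicle_locations},
    }

# Three locations (a depot plus two tasks), two vehicles starting and
# ending at the depot (location 0).
payload = build_routing_payload(
    cost_matrix=[[0, 5, 4], [5, 0, 3], [4, 3, 0]],
    task_locations=[1, 2],
    vehicle_locations=[[0, 0], [0, 0]],
)
```

A payload like this would then be POSTed to the service endpoint once the container is running.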

Running NVIDIA cuOpt

Setup

Before you can run the NVIDIA cuOpt container, your Docker environment must support NVIDIA GPUs. To enable GPU support, see the Using Native GPU Support section of the NVIDIA Containers And Frameworks User Guide, then follow the instructions below. For more information about using NGC, refer to the NGC Container User Guide.

  • If you have not generated an API Key, you can generate one by going to the Setup option in your profile and choosing Get API Key. Store the key securely; you can generate a new one later if needed. More information can be found here.

  • If you haven't logged in through docker, or you have changed your API Key, provide your NGC credentials through docker login using the following command:

    sudo docker login nvcr.io
    Username: $oauthtoken
    Password: <my-api-key>
    

    Note: username is $oauthtoken and password is your API Key.

Procedure

  1. In the Select a tag dropdown, locate the container image release that you want to run.

  2. Click the Copy Image Path button to copy the container image path.

  3. Open a command prompt and use the image path to pull the docker image. Ensure the pull completes successfully before proceeding to the next step.

    docker pull <IMAGE-PATH>
    
  4. Run the container image.

  • To Run cuOpt as a RESTful Service

    • If you have Docker 19.03 or later, a typical command to launch the container is:

      docker run -it --gpus all --rm --network=host <ImageID>
      
      • If you are running on WSL, you would need explicit port mapping:

        docker run -it -p 8000:5000 --gpus all --rm <ImageID>
        
    • If you have Docker 19.02 or earlier, a typical command to launch the container is:

      nvidia-docker run -it --rm --network=host <ImageID>
      
      • If you are running on WSL, you need explicit port mapping:

        nvidia-docker run -it -p 8000:5000 --rm <ImageID>
        
    • (Optional) If running the example notebooks found on GitHub, ensure the microservice notebooks reference the relevant IP and port. By default in a local environment this is 127.0.0.1 on port 5000; on WSL it is port 8000.

    • (Optional) Once the container is running, local static and interactive documentation can be found here.

  • To run cuOpt as a Python SDK

    • If you have Docker 19.03 or later, a typical command to launch the container is:

      docker run -it --gpus all --rm --network=host <ImageID> /bin/bash
      
    • If you have Docker 19.02 or earlier, a typical command to launch the container is:

      nvidia-docker run -it --rm --network=host <ImageID> /bin/bash
      
    • (Optional) There are sample notebooks available in conatiner itself to try, and can be accessed as follows

      docker run --gpus all -it --rm --network=host <ImageID> jupyter-notebook --notebook-dir /home/cuopt_user/notebooks
      
    • (Optional) If running example notebooks found on GitHub, after cloning the repo, ensure that the Python notebooks are mounted to the running container.

      docker run --gpus all -it --rm --network=host -v <FullPathToNotebooks>:/notebooks --user 1000:1000 <ImageID> jupyter-notebook --notebook-dir /notebooks
      

Where:

  • -it runs the container in interactive mode
  • --rm deletes the container when it exits
  • --network sets the networking mode for the container
  • -v mounts a local directory into the container

Additional setup instructions can be found here.
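
Once the service container is running, a client can submit optimization data over HTTP. Below is a hedged Python sketch using the requests library; the /cuopt/routes path is an assumed example rather than a documented endpoint, and the actual paths are listed in the OpenAPI documentation served by the container. The helper also encodes the port difference noted above: 5000 with host networking, 8000 under the WSL port mapping.

```python
def service_base_url(wsl: bool = False) -> str:
    """Return the microservice base URL.

    With --network=host the service listens on 127.0.0.1:5000; under
    WSL with the -p 8000:5000 mapping it is reached on port 8000.
    """
    return f"http://127.0.0.1:{8000 if wsl else 5000}"

def submit(payload: dict, wsl: bool = False) -> dict:
    """POST an optimization request to the running service.

    The endpoint path is an assumed example; requires the cuOpt
    container to be up and the requests package to be installed.
    """
    import requests  # pip install requests
    url = f"{service_base_url(wsl)}/cuopt/routes"  # hypothetical path
    resp = requests.post(url, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()
```

The same helper works for the notebook examples: point them at service_base_url() locally, or service_base_url(wsl=True) under WSL.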

🐛 Bug Fixes

• The cloud scripts are reverted to use Ansible 6.0.0 in the cloud-native-stack installation to work around the CNS install failure in Ansible 7.0.0.
• Fixed a validation bug in the update_task_location endpoint on the microservice side.

🚀 New Features

• Vehicle-dependent service times.
• Provision for limiting the amount of time a vehicle can work, including its travel time; this is analogous to a maximum cost per vehicle.
• Enhanced objective functions to minimize the variance of route sizes and route service times.
• Task IDs for identifying tasks on the microservice.

🛠️ Improvements

• Allow infeasible solutions in local search and make them feasible through enhanced heuristics.
• Cycle finder ported to the GPU.
• Enhanced validation for checking conflicts between break time windows and vehicle time windows.
• Improved heuristics for PDP (pickup-and-delivery) use cases.
• New version check applied to the cuOpt microservice.
• New tests for validating the cloud scripts.
• New performance tests for the microservice.

Documentation

Examples and Resources

By pulling and using the container, you accept the terms and conditions of this End User License Agreement.