Aerial CUDA-Accelerated RAN

Description: NVIDIA Aerial™ CUDA-Accelerated RAN is an application framework for building commercial-grade, software-defined, GPU-accelerated, cloud-native 5G/6G networks.
Publisher: NVIDIA
Latest Tag: 25-1-cubb
Modified: April 17, 2025
Compressed Size: 20.99 GB
Multinode Support: No
Multi-Arch Support: Yes

What is NVIDIA Aerial™ CUDA-Accelerated RAN?

NVIDIA Aerial™ CUDA-Accelerated RAN is an application framework for building commercial-grade, software-defined, GPU-accelerated, cloud-native 5G/6G networks. It enables a fully cloud-native virtual 5G RAN solution to support a wide range of next-generation edge AI and RAN services using commercial off-the-shelf (COTS) servers.

The platform supports full inline GPU acceleration of layer 1 (cuPHY) and GPU-accelerated functions of layer 2 (cuMAC) of the 5G/6G stack. It provides a full-stack framework for gNB integration with L2/L3 (MAC, RLC, PDCP), along with manageability and orchestration. Aerial CUDA-Accelerated RAN also supports non-5G signal processing use cases.

The NVIDIA Aerial™ CUDA-Accelerated RAN package simplifies building programmable and scalable software-defined 5G vRAN using COTS servers with NVIDIA GPUs and has been deployed in commercial and research networks.

What is in this Container?

The NVIDIA Aerial™ CUDA-Accelerated RAN package includes the source code and a single Docker container comprising the following components:

  • Aerial cuPHY:  Aerial cuPHY is a cloud-native, software-defined platform optimized to run 5G/6G-compatible gNB physical layer (L1/PHY) workloads on NVIDIA DPU/NIC and GPU hardware.
  • Aerial cuMAC:  Aerial cuMAC, a Layer 2 MAC scheduler acceleration library, is developed to improve spectral efficiency by introducing a multi-cell scheduler with enhanced algorithms within Layer 2 of the RAN protocol stack.
  • pyAerial:  pyAerial is a Python library of physical layer components that can be used as part of the workflow in taking a design from simulation to real-time operation.
  • Aerial Data Lake:  Aerial Data Lake can be used in conjunction with the NVIDIA pyAerial library to generate training data for layer-1 pipelines built on neural networks.
  • Aerial TestMAC:  Aerial TestMAC functions as the L2/L1 interface, scheduling packets according to a predefined launch pattern.
  • Aerial RU Emulator:  The Aerial RU emulator is a basic implementation of the O-RAN fronthaul (FH) interface. Its functions include verifying the timing of FH packets, checking the integrity of DL IQ samples, and scheduling the transmission of UL IQ samples.
  • Aerial RAN CoLab Over-the-Air (ARC-OTA):  NVIDIA Aerial RAN CoLab Over-the-Air is a full-featured platform targeted at next-generation wireless evolution that eases developer onboarding and algorithm development in real-time networks.

Prerequisites

The following software components are required on the host:

  • CUDA 12.8 driver (570.124.06)
  • GDRCopy 2.4.1
  • NVIDIA Container Toolkit:
    • https://github.com/NVIDIA/nvidia-docker
  • Docker:
    • https://docs.docker.com/install/linux/docker-ce/ubuntu/

Supported GPU and NIC combination: Grace Hopper MGX + BF3
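
As a sanity check before pulling the container, the following commands can confirm that the host prerequisites are in place. This is a minimal sketch using standard tooling; adjust for your installation.

nvidia-smi --query-gpu=driver_version --format=csv,noheader   # expect 570.124.06
lsmod | grep gdrdrv                                           # GDRCopy kernel module loaded
nvidia-ctk --version                                          # NVIDIA Container Toolkit CLI present
docker --version                                              # Docker installed (19.03 or newer for --gpus)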

Get the container

Log in to the NGC container registry with Docker, entering your NGC credentials when prompted:

sudo docker login nvcr.io
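
If you prefer a non-interactive login, the NGC registry accepts the username $oauthtoken with your NGC API key as the password. A minimal sketch, assuming your key is exported as NGC_API_KEY:

echo "$NGC_API_KEY" | sudo docker login nvcr.io --username '$oauthtoken' --password-stdin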

Pull the cuBB container with the following command:

sudo docker pull nvcr.io/nvidia/aerial/aerial-cuda-accelerated-ran:25-1-cubb
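
As a quick check, the pulled image should appear in the local image list:

sudo docker images nvcr.io/nvidia/aerial/aerial-cuda-accelerated-ran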

Run the container

Run the cuBB container with the following commands:

sudo docker run --restart unless-stopped -dP --gpus all --network host \
    --shm-size=4096m --privileged -it \
    --device=/dev/gdrdrv:/dev/gdrdrv \
    -v /lib/modules:/lib/modules \
    -v /dev/hugepages:/dev/hugepages \
    -v ~/share:/opt/cuBB/share \
    --userns=host --ipc=host \
    -v /var/log/aerial:/var/log/aerial \
    --name cuBB \
    nvcr.io/nvidia/aerial/aerial-cuda-accelerated-ran:25-1-cubb

sudo docker exec -it cuBB /bin/bash

Note that the --gpus option requires Docker version 19.03 or newer. Check the Docker version with:

$ docker --version

Use version 19.03 or newer. If you need to use an older Docker version, leave out the --gpus all part of the command.
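
Once inside the container, the following quick checks (a sketch, not part of the official bring-up procedure) confirm that the GPUs, huge pages, and GDRCopy device were mapped in correctly:

nvidia-smi                      # GPUs visible inside the container
grep HugePages /proc/meminfo    # host huge pages mounted
ls -l /dev/gdrdrv               # GDRCopy device present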

License

The End User License Agreement is included with the product. By pulling and using the NVIDIA Aerial™ CUDA-Accelerated RAN collection or containers, you accept the terms and conditions of this license.

Documentation

Please see the release notes, installation guide, and quickstart guide on the NVIDIA Docs Hub.

Technical Support

Use the NVIDIA Aerial Developer Forum for questions regarding this software. You must have a developer account and be signed in to access the forum.