
PointPillarNet

Description: Model to detect one or more objects from a LIDAR point cloud file and return 3D bounding boxes.
Publisher: NVIDIA
Latest Version: trainable_v1.1
Modified: February 27, 2024
Size: 128.96 MB

PointPillars Model Card

Model Overview

The models described in this card detect one or more objects from a LIDAR point cloud file and return a 3D bounding box around each object. These pre-trained PointPillars models were trained on a point cloud dataset collected by a solid-state LIDAR.

Model Architecture

These models are based on the PointPillars architecture in the NVIDIA TAO Toolkit.

Training

The training algorithm optimizes the network to minimize the localization and confidence loss for the objects.

Training Data

The PointPillars models were trained on a proprietary LIDAR point cloud dataset.

Performance

Evaluation Data

The evaluation dataset for the PointPillars models was obtained in the same way as the training dataset.

Methodology and KPI

The key performance indicator is the mean average precision (mAP) of object detection in 3D and in Bird's-Eye View (BEV). The KPIs for the evaluation data are reported in the table below.

Model                          Dataset              mAP (BEV / 3D)
pointpillars_trainable.tlt     proprietary dataset  65.2167% / 51.7159%
pointpillars_deployable.etlt   proprietary dataset  66.6860% / 52.8530%

Real-time Inference Performance

The inference is run on the provided deployable model at FP16 precision. The Jetson devices run at Max-N configuration for maximum system performance. The performance shown below is only for inference of the deployable (pruned) model. As a comparison, we also show the inference performance of the unpruned model (not available here).

Model     Device  Precision  Batch size  FPS
Pruned    Xavier  FP16       1           39
Unpruned  Xavier  FP16       1           31

How to use this model

These models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit or TensorRT.

The primary use case intended for these models is detecting objects in a point cloud file.

Two models are provided:

  • pointpillars_trainable.tlt
  • pointpillars_deployable.etlt

The trainable models are intended for training and fine-tuning with the TAO Toolkit, using the user's own point cloud dataset. High-fidelity models can be trained and adapted to the use case.

The deployable models are intended for easy deployment to the edge using TensorRT.

The trainable and deployable models are encrypted and can be decrypted with the following key:

  • Model load key: tlt_encode

Please make sure to use this as the key for all TAO commands that require a model load key.
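
For example, a training command that loads this model passes the key via the -k option. The invocation below is only an illustrative sketch (the spec file path and results directory are placeholders; see the TAO Toolkit User Guide for the exact arguments):

# Illustrative only: fine-tune from the pre-trained model with the TAO launcher.
tao pointpillars train -e /path/to/pointpillars_spec.yaml \
                       -r /path/to/results_dir \
                       -k tlt_encode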

Input

The models have two inputs.

  • points: The points in a point cloud file. It has the shape (N, P, 4), where N is the batch size, P is the maximum number of points in a point cloud file in the dataset, and 4 is the number of features per point.
  • num_points: The actual number of points in each point cloud file. It has the shape (N,), where N is the batch size as above.

Output

Category labels (Vehicle, Pedestrian, Cyclist) and 3D bounding-box coordinates for each detected object in the input point cloud file.

Instructions to use trainable model with TAO Toolkit

In order to use these models as pre-trained weights for transfer learning, please use the snippet below as a template for the OPTIMIZATION component of the config file used to train a PointPillars model. For more information on the config file, please refer to the TAO Toolkit User Guide.

  1. Set PRETRAINED_MODEL_PATH in the OPTIMIZATION component:
PRETRAINED_MODEL_PATH: "/path/to/the/model.tlt"
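
For orientation, the sketch below shows how this line might sit inside the OPTIMIZATION component. The surrounding field names and values are illustrative assumptions only; consult the TAO Toolkit User Guide for the authoritative list of fields:

OPTIMIZATION:
    BATCH_SIZE_PER_GPU: 4    # illustrative value
    NUM_EPOCHS: 80           # illustrative value
    LR: 0.003                # illustrative value
    PRETRAINED_MODEL_PATH: "/path/to/the/model.tlt"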

Instructions to deploy these models with TensorRT

The PointPillars models can be deployed in TensorRT with the TensorRT C++ sample, using TensorRT 8.2.

As a dependency, the TensorRT sample requires TensorRT OSS 22.02 to be installed.

Detailed steps are shown below.

  • Install TensorRT 8.2, or use the pre-installed one if it is already available.

  • Install TensorRT OSS 22.02.

# Clone the 22.02 branch of TensorRT OSS and fetch its submodules
git clone -b 22.02 https://github.com/NVIDIA/TensorRT.git TensorRT
cd TensorRT
git submodule update --init --recursive
# Build only the nvinfer_plugin target
mkdir -p build && cd build
cmake .. -DCUDA_VERSION=$CUDA_VERSION -DGPU_ARCHS=$GPU_ARCHS
make nvinfer_plugin -j$(nproc)
# Replace the stock plugin library with the freshly built one
# (consider backing up the original libnvinfer_plugin.so.8.2.3 first)
cp libnvinfer_plugin.so.8.2.* /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.2.3
cp libnvinfer_plugin_static.a /usr/lib/x86_64-linux-gnu/libnvinfer_plugin_static.a
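
Note that the /usr/lib/x86_64-linux-gnu destination above assumes an x86_64 host; on Jetson devices the TensorRT libraries live under /usr/lib/aarch64-linux-gnu instead.
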
  • Train the model in TAO Toolkit and export to the .etlt model.

  • Generate a TensorRT engine on the target device with tao-converter.

# -k: model load key; -e: output engine path; -t: precision
# -p <input_name>,<min_shape>,<opt_shape>,<max_shape>: dynamic-shape profile per input
tao-converter -k $KEY \
              -e $USER_EXPERIMENT_DIR/trt.fp16.engine \
              -p points,1x204800x4,1x204800x4,1x204800x4 \
              -p num_points,1,1,1 \
              -t fp16 \
              pointpillars_deployable.etlt
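
The shapes given to -p above correspond to the points and num_points inputs described in the Input section, with identical minimum, optimum, and maximum shapes. Once the engine is generated, you can optionally sanity-check it with trtexec, which ships with TensorRT (a minimal sketch; the binary path below is the usual install location and may differ on your system):

# Load the serialized engine and time inference with random-valued inputs
/usr/src/tensorrt/bin/trtexec --loadEngine=$USER_EXPERIMENT_DIR/trt.fp16.engine
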
  • Clone, build and run the C++ sample.
cd ~
git clone https://github.com/NVIDIA-AI-IOT/tao_toolkit_recipes.git
cd tao_toolkit_recipes
git lfs pull
cd tao_pointpillars/tensorrt_sample/test
mkdir build
cd build
cmake .. -DCUDA_VERSION=<CUDA_VERSION>
make -j8
./pointpillars -e /path/to/tensorrt/engine -l ../../data/102.bin -t 0.01 -c Vehicle,Pedestrian,Cyclist -n 4096 -p -d fp16

Limitations

TensorRT inference batch size

Currently, the TensorRT engine of the PointPillars model can only run at batch size 1.
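
If you need to process many point cloud files, one simple workaround is to run the single-batch sample once per file, as in this hypothetical loop over the sample's data directory (paths and options mirror the run command above):

# Hypothetical workaround: invoke batch-1 inference once per point cloud file
for f in ../../data/*.bin; do
    ./pointpillars -e /path/to/tensorrt/engine -l "$f" -t 0.01 -c Vehicle,Pedestrian,Cyclist -n 4096 -p -d fp16
done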

License

The license to use these models is covered by the Model EULA. By downloading the trainable or deployable version of the model, you accept the terms and conditions of this license.

Ethical AI

The NVIDIA PointPillars model detects 3D objects in a point cloud file.

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.