PointPillarNet detects objects from a LIDAR point cloud file. This model is ready for commercial use.
Architecture Type: Convolutional Neural Network (CNN)
Network Architecture: PointPillars Architecture
Input Type(s): Point Cloud File
Input Format(s): Lidar
Input Parameters: Points, Num_Points
Other Properties Related to Input:
Output Type(s): Label(s), Bounding-Box(es), Confidence Scores
Output Format: Label: Text String(s); Bounding Box: (x-coordinate, y-coordinate, z-coordinate, width, height, depth), Confidence Scores: Floating Point
Other Properties Related to Output: Category Label(s): (Vehicle, Pedestrian, Cyclist); Bounding Box Coordinates; Confidence Scores
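As an informal illustration of the interface above (not part of the TAO Toolkit or TensorRT API), the Python sketch below shows how a single LIDAR frame could be packed into the points / num_points inputs and how one detection could be unpacked into a label, 3D box, and confidence score. It assumes a KITTI-style .bin file of float32 (x, y, z, intensity) records, the 204,800-point fixed input size used by the tao-converter command later in this card, and a hypothetical class-index ordering.

```python
import numpy as np

# Hypothetical helpers for illustration only; not part of TAO Toolkit or TensorRT.
MAX_POINTS = 204800                                   # matches the 1x204800x4 input shape used later
CLASS_NAMES = ["Vehicle", "Pedestrian", "Cyclist"]    # assumed index order

def load_frame(bin_path):
    """Read a KITTI-style LIDAR .bin file into the points/num_points inputs."""
    cloud = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)  # (N, 4): x, y, z, intensity
    n = min(len(cloud), MAX_POINTS)
    points = np.zeros((1, MAX_POINTS, 4), dtype=np.float32)         # zero-padded to the fixed size
    points[0, :n] = cloud[:n]
    num_points = np.array([n], dtype=np.int32)
    return points, num_points

def unpack_detection(box, label, score):
    """Interpret one detection: (x, y, z, width, height, depth), class index, confidence."""
    x, y, z, w, h, d = box
    return {"class": CLASS_NAMES[int(label)],
            "center": (x, y, z), "size": (w, h, d), "score": float(score)}
```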
Runtime Engine(s):
Supported Hardware Architecture(s):
Supported Operating System(s):
Data Collection Method by dataset:
Labeling Method by dataset:
Properties:
7,481 training frames containing 80,256 labeled objects from a proprietary LIDAR point cloud dataset of vehicles, pedestrians, cyclists, and other elements of road scenery, collected by a solid-state LIDAR.
The key performance indicator is the mean average precision (mAP) for object detection in 3D and Bird's-Eye View (BEV). The KPIs for the evaluation data are reported in the table below.
Model | Dataset | mAP BEV/3D |
---|---|---|
pointpillars_trainable.tlt | proprietary dataset | 65.2167% / 51.7159% |
pointpillars_deployable.etlt | proprietary dataset | 66.6860% / 52.8530% |
Data Collection Method by dataset:
Labeling Method by dataset:
Properties:
7,581 training frames containing 80,256 labeled objects from a proprietary LIDAR point cloud dataset of vehicles, pedestrians, cyclists, and other elements of road scenery, collected by a solid-state LIDAR.
Engine: TensorRT
Test Hardware:
Inference is run on the provided deployable model at FP16 precision. The Jetson devices run at Max-N configuration for maximum system performance. The performance shown below is only for inference of the deployable (pruned) model. As a comparison, we also show the inference performance of the unpruned model (not available here).
Model | Device | Precision | Batch_size | FPS |
---|---|---|---|---|
Pruned | Xavier | FP16 | 1 | 39 |
Unpruned | Xavier | FP16 | 1 | 31 |
These models need to be used with NVIDIA Hardware and Software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit or TensorRT.
The primary use case intended for these models is detecting objects in a point cloud file.
In total, two models are provided:
pointpillars_trainable.tlt
pointpillars_deployable.etlt
The trainable models are intended for training and fine-tuning with the TAO Toolkit using the user's point cloud dataset. High-fidelity models can be trained and adapted to the use case.
The deployable models are intended for easy deployment to the edge using TensorRT.
The trainable and deployable models are encrypted and can be decrypted with the following key:
tlt_encode
Please make sure to use this as the key for all TAO commands that require a model load key.
In order to use these models as a pre-trained model for transfer learning, please use the snippet below as a template for the OPTIMIZATION component of the config file to train a PointPillars model. For more information on the config file, please refer to the TAO Toolkit User Guide. Set the PRETRAINED_MODEL_PATH parameter in OPTIMIZATION to the path of the model:
PRETRAINED_MODEL_PATH: "/path/to/the/model.tlt"
The PointPillars model can be deployed in TensorRT with the TensorRT C++ sample, using TensorRT 8.2. As a dependency, the TensorRT sample requires TensorRT OSS 22.02 to be installed.
Detailed steps are shown below.
Install TensorRT 8.2, or use the pre-installed version if one is already available.
Install TensorRT OSS 22.02. Before running cmake below, set CUDA_VERSION and GPU_ARCHS to match your CUDA installation and target GPU architecture.
git clone -b 22.02 https://github.com/NVIDIA/TensorRT.git TensorRT
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DCUDA_VERSION=$CUDA_VERSION -DGPU_ARCHS=$GPU_ARCHS
make nvinfer_plugin -j$(nproc)
cp libnvinfer_plugin.so.8.2.* /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.2.3
cp libnvinfer_plugin_static.a /usr/lib/x86_64-linux-gnu/libnvinfer_plugin_static.a
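Before moving on, it can help to confirm that the rebuilt plugin library and the expected TensorRT release are visible to the system. A minimal check, assuming the copy destination above and that the TensorRT Python bindings are installed, might look like:

```python
import ctypes
import tensorrt as trt

# Should report an 8.2.x version if the matching TensorRT release is installed.
print("TensorRT version:", trt.__version__)

# Loads the plugin library copied above; raises OSError if it is missing or unresolvable.
ctypes.CDLL("/usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.2.3")
print("libnvinfer_plugin loaded")
```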
Train the model in TAO Toolkit and export it to a .etlt model.
Generate a TensorRT engine on the target device with tao-converter.
tao-converter -k $KEY \
-e $USER_EXPERIMENT_DIR/trt.fp16.engine \
-p points,1x204800x4,1x204800x4,1x204800x4 \
-p num_points,1,1,1 \
-t fp16 \
pointpillars_deployable.etlt
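After tao-converter finishes, the generated engine can be sanity-checked before building the C++ sample. The sketch below is an optional check, assuming the TensorRT 8.2 Python bindings are available and the engine path passed with -e above; it deserializes the engine and prints its bindings, which should correspond to the points (1x204800x4) and num_points inputs specified above plus the network's outputs (dynamic dimensions may print as -1).

```python
import tensorrt as trt

ENGINE_PATH = "trt.fp16.engine"   # the -e path passed to tao-converter above

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")   # PointPillars relies on custom TensorRT plugins

with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# List every binding: inputs should match the shapes given to tao-converter.
for i in range(engine.num_bindings):
    kind = "input " if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i),
          tuple(engine.get_binding_shape(i)),
          trt.nptype(engine.get_binding_dtype(i)).__name__)
```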
cd ~
git clone https://github.com/NVIDIA-AI-IOT/tao_toolkit_recipes.git
cd tao_toolkit_recipes
git lfs pull
cd tao_pointpillars/tensorrt_sample/test
mkdir build
cd build
cmake .. -DCUDA_VERSION=<CUDA_VERSION>
make -j8
./pointpillars -e /path/to/tensorrt/engine -l ../../data/102.bin -t 0.01 -c Vehicle,Pedestrian,Cyclist -n 4096 -p -d fp16
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Promise and the Explainability, Bias, Safety & Security, and Privacy Subcards.