
Optical Character Detection



Network to detect characters in an image.



Latest Version: July 24, 2023 (147.76 MB)

OCDNet Model Card

Model Overview

The model described in this card is an optical character detection network that detects text in images. Two trainable OCDNet models are provided, trained on the Uber-Text dataset, along with two deployable OCDNet models fine-tuned on the ICDAR2015 dataset.

Model Architecture

This model is based on DBNet, a network architecture for real-time scene text detection with differentiable binarization. It addresses text localization and segmentation in natural images with complex backgrounds and varied text shapes.


The training algorithm inserts the binarization operation into the segmentation network and jointly optimizes it so that the network can learn to separate foreground and background pixels more effectively. The binarization threshold is learned by minimizing the IoU loss between the predicted binary map and the ground truth binary map.
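The approximate binarization step can be sketched as follows. This is a minimal NumPy illustration of the idea from the DBNet paper, not the TAO implementation; the amplification factor `k = 50` follows the paper.

```python
import numpy as np

def differentiable_binarization(prob_map, thresh_map, k=50.0):
    """Approximate binarization B = 1 / (1 + exp(-k * (P - T))).

    prob_map:   predicted probability map P, values in [0, 1]
    thresh_map: learned per-pixel threshold map T, values in [0, 1]
    k:          amplification factor (50 in the DBNet paper)
    """
    return 1.0 / (1.0 + np.exp(-k * (prob_map - thresh_map)))

# Pixels well above the threshold saturate toward 1 and those below
# toward 0, while the function stays differentiable, so the threshold
# map can be learned jointly with the segmentation network.
P = np.array([0.9, 0.5, 0.1])
T = np.array([0.3, 0.5, 0.3])
B = differentiable_binarization(P, T)
```

At inference time the learned threshold branch can be dropped and a fixed threshold applied to the probability map instead, which is what makes the approach attractive for real-time detection.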

Training Data

The trainable models were trained on the Uber-Text dataset, which contains street-level images collected from car-mounted sensors, with ground truths annotated by a team of image analysts. The train_4Kx4K, train_1Kx1K, val_4Kx4K, val_1Kx1K, and test_4Kx4K splits were used for training and the test_1Kx1K split for validation, giving 107,812 training images and 10,157 validation images. The deployable models were fine-tuned on the ICDAR2015 dataset, using the trainable model as pretrained weights. The ICDAR2015 dataset contains 1,000 training images and 500 test images.


Evaluation Data

The OCDNet model was evaluated using the Uber-Text test dataset.

Methodology and KPI

The key performance indicator is the hmean (harmonic mean of precision and recall) of detection. The KPIs for the evaluation data are reported below.

| Model | Dataset | hmean |
|---|---|---|
| ocdnet_deformable_resnet18 | Uber-Text | 81.1% |
| ocdnet_deformable_resnet50 | Uber-Text | 82.2% |
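For reference, hmean is the harmonic mean of detection precision and recall (the F1 measure). A minimal sketch of the computation, using hypothetical match counts that are illustrative rather than taken from the evaluation above:

```python
def hmean(precision, recall):
    """Harmonic mean of precision and recall (F1 / hmean)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 80 of 100 ground-truth text regions are matched
# by predictions, and 90 of 110 predicted boxes match a ground truth.
precision = 90 / 110
recall = 80 / 100
score = hmean(precision, recall)  # ≈ 0.809
```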

Real-time Inference Performance

Inference uses FP16 precision with an input shape of <batch>x3x640x640. Inference performance was measured by running an OCDNet deployable model with trtexec on AGX Orin, Orin NX, Orin Nano, NVIDIA T4, NVIDIA L4, and NVIDIA A100 GPUs. The Jetson devices run in the Max-N configuration for maximum system performance. The numbers below are for inference only; end-to-end performance with streaming video data may vary slightly depending on the application's use case.

| Model | Device | Precision | Batch size | FPS |
|---|---|---|---|---|
| ocdnet_deformable_resnet18 | Orin Nano | FP16 | 32 | 31 |
| ocdnet_deformable_resnet18 | Orin NX | FP16 | 32 | 46 |
| ocdnet_deformable_resnet18 | AGX Orin | FP16 | 32 | 122 |
| ocdnet_deformable_resnet18 | T4 | FP16 | 32 | 294 |
| ocdnet_deformable_resnet18 | L4 | FP16 | 32 | 432 |
| ocdnet_deformable_resnet18 | A100 | FP16 | 32 | 1786 |

How to Use This Model

This model must be used with NVIDIA hardware and software: it can run on any NVIDIA GPU, including NVIDIA Jetson devices, with the TAO Toolkit, DeepStream SDK, or TensorRT.

The primary use case for this model is to detect text on images.

There are two types of models provided (both unpruned).

  • trainable
  • deployable

The trainable models are intended for training with the user's own dataset using TAO Toolkit. This can provide high-fidelity models that are adapted to the use case. A Jupyter notebook is available as a part of the TAO container and can be used to re-train.

The deployable models share the same structure as the trainable models but are exported in ONNX format. They can be deployed using TensorRT, nvOCDR, or DeepStream.


Input: Images of C x H x W (H and W should be multiples of 32).

Output: BBox or polygon coordinates for each detected text region in the input image.
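Because H and W must be multiples of 32, input images are typically resized or padded before inference. A minimal padding sketch for an H x W x C NumPy image; the helper name is illustrative and not part of the TAO API:

```python
import numpy as np

def pad_to_multiple_of_32(image):
    """Zero-pad an H x W x C image so H and W are multiples of 32."""
    h, w = image.shape[:2]
    new_h = (h + 31) // 32 * 32  # round height up to the next multiple of 32
    new_w = (w + 31) // 32 * 32  # round width up to the next multiple of 32
    padded = np.zeros((new_h, new_w, image.shape[2]), dtype=image.dtype)
    padded[:h, :w] = image       # original content in the top-left corner
    return padded

img = np.ones((500, 700, 3), dtype=np.uint8)
out = pad_to_multiple_of_32(img)  # shape becomes (512, 704, 3)
```

Padding (rather than resizing) preserves the aspect ratio of the text, at the cost of some wasted computation on the padded border.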

Instructions to Use the Model with TAO

To use these models as pretrained weights for transfer learning, use the snippet below as a template for the model component of the experiment spec file to train an OCDNet model. For more information on the experiment spec file, refer to the TAO Toolkit User Guide.

  model:
    load_pruned_graph: False
    pruned_graph_path: '/results/prune/pruned_0.1.pth'
    pretrained_model_path: '/data/ocdnet/ocdnet_deformable_resnet18.pth'
    backbone: deformable_resnet18

Instructions to deploy the model with DeepStream

To create the entire end-to-end video analytics application, deploy this model with the DeepStream SDK. The DeepStream SDK is a streaming analytics toolkit that accelerates building AI-based video analytics applications. DeepStream supports direct integration of this model into the DeepStream sample apps.

To deploy this model with DeepStream, follow these instructions.


Restricted Usage in Different Fields

The NVIDIA OCDNet trainable model is trained on Uber-Text, which contains street-view images only. To achieve better accuracy in a specific field, more data is usually required to fine-tune the pretrained model with the TAO Toolkit.

Model versions:

  • trainable_v1.0 - Pretrained models for fine-tuning.
  • deployable_v1.0 - Models deployable with DeepStream.



References

  • Liao, M., Wan, Z., Yao, C., Chen, K., Bai, X.: Real-time Scene Text Detection with Differentiable Binarization (2020).
  • Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., Wei, Y.: Deformable Convolutional Networks (2017).
  • He, W., Zhang, X., Yin, F., Liu, C.: Deep Direct Regression for Multi-Oriented Scene Text Detection (2017).

License

The license to use these models is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of this license.


Ethical AI

The NVIDIA OCDNet model detects optical characters.

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developers to ensure that it meets the requirements for the relevant industry and use case, that the necessary instructions and documentation are provided to understand error rates, confidence intervals, and results, and that the model is being used under the conditions and in the manner intended.