
Morpheus AI Engine


Description: A Helm chart for deploying the infrastructure of the Morpheus AI Engine. It includes the Triton Inference Server, Kafka, and Zookeeper.
Publisher: NVIDIA
Latest Version: 22.04
Compressed Size: 19.4 KB
Modified: April 28, 2022

Overview

NVIDIA Morpheus is an open AI application framework that provides cybersecurity developers with a highly optimized AI pipeline and pre-trained AI capabilities that, for the first time, allow them to instantaneously inspect all IP traffic across their data center fabric. Bringing a new level of security to data centers, Morpheus provides dynamic protection, real-time telemetry, adaptive policies, and cyber defenses for detecting and remediating cybersecurity threats.

NOTE: This chart deploys publicly available images that originate from Docker Hub, specifically Kafka and Zookeeper. NVIDIA makes no representation as to support or suitability for production purposes of these container images.

Setup

The Morpheus AI Engine container is packaged as a Kubernetes (aka k8s) deployment using a Helm chart. NVIDIA provides installation instructions for the NVIDIA Cloud Native Core Stack, which incorporates the setup of these platforms and tools. Morpheus and its use of Triton Inference Server are initially designed for the T4 (e.g., the G4 instance type in AWS EC2), V100 (P3), and A100 (P4d) GPU families.
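
Before deploying, you can optionally confirm that the cluster exposes GPU resources to Kubernetes. The check below is a minimal sketch; it assumes the NVIDIA device plugin is already installed (the Cloud Native Core Stack sets this up):

# Optional sanity check: list the schedulable NVIDIA GPUs reported by each node
kubectl describe nodes | grep -i 'nvidia.com/gpu'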

NGC API Key

First, you will need to set up your NGC API Key to access all the Morpheus components, using the instructions from the NGC Registry CLI User Guide. Once you have created your API key, store it in an environment variable for use by the commands later in these instructions:

export API_KEY="<your key>"
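
Optionally, you can verify that the key authenticates against the NGC container registry before continuing. This sketch assumes Docker is installed on the host and uses the literal username $oauthtoken that NGC expects:

# Optional: confirm the API key is accepted by nvcr.io
echo "$API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin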

After installing the Cloud Native Core Stack (formerly EGX Stack), install and configure the NGC Registry CLI using the instructions from the NGC Registry CLI User Guide.
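
As a minimal sketch of that configuration step, the CLI can be pointed at your key interactively; ngc config set prompts for the API key, org, team, and output format:

# Configure the NGC CLI; enter the API key and your org/team when prompted
ngc config set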

Create Namespace for Morpheus

Create a namespace, and an environment variable holding its name, to logically separate Morpheus-related deployments from other projects in the k8s cluster deployed via the Cloud Native Core Stack:

kubectl create namespace <some name>
export NAMESPACE="<some name>"
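
As a quick sanity check, you can confirm the namespace exists before continuing:

kubectl get namespace $NAMESPACE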

Install Morpheus AI Engine

The Morpheus AI Engine consists of the following components:

  • NVIDIA Triton Inference Server [ai-engine] from NVIDIA for processing inference requests.
  • Apache Kafka [broker] to consume and publish messages.
  • Apache Zookeeper [zookeeper] to maintain coordination between the Kafka Brokers.

Install the chart as follows:

helm fetch https://helm.ngc.nvidia.com/nvidia/morpheus/charts/morpheus-ai-engine-22.04.tgz --username='$oauthtoken' --password=$API_KEY --untar
helm install --set ngc.apiKey="$API_KEY" \
 --set aiengine.args="{tritonserver,--model-repository=/common/models,--model-control-mode=explicit}" \
 --namespace $NAMESPACE \
 morpheus1 \
 morpheus-ai-engine
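
Once the release is installed, a quick way to check that the ai-engine, broker, and zookeeper components came up is to query the release status and the pods and services in the namespace. This is a sketch using standard Helm and kubectl output; exact pod names depend on the chart's templates:

# Verify the release and the deployed components
helm status morpheus1 --namespace $NAMESPACE
kubectl --namespace $NAMESPACE get pods
kubectl --namespace $NAMESPACE get svc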