The Omniverse AWS NLB Manager service manages a pool of target groups and listeners for a set of preconfigured Network Load Balancers (NLBs) in AWS.
This chart assumes that a Kubernetes cluster is already available and configured.
Before installing, it's worth considering Image Pull Secrets and Ingress access.
NOTE: use `--dry-run` to inspect the resulting Kubernetes manifests generated from the Helm chart.
```
helm upgrade \
  --install \
  nv.ov.svc.streaming.aws.nlb \
  . \
  --create-namespace \
  --namespace <<NAMESPACE-NAME>> \
  --set global.imagePullSecrets[0].name=registry-secret
```
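For example, appending `--dry-run` (optionally with `--debug`) to the command above renders the manifests without applying them:

```
helm upgrade \
  --install \
  nv.ov.svc.streaming.aws.nlb \
  . \
  --create-namespace \
  --namespace <<NAMESPACE-NAME>> \
  --set global.imagePullSecrets[0].name=registry-secret \
  --dry-run --debug
```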
Container images are hosted on nvcr.io behind a private repository, so you will need to create an Image Pull Secret before Kubernetes can pull the containers.
NOTE: if the <<NAMESPACE-NAME>> namespace does not exist, create it prior to creating the secret.
```
kubectl create namespace <<NAMESPACE-NAME>>
```

```
kubectl create secret docker-registry \
  registry-secret \
  --namespace <<NAMESPACE-NAME>> \
  --docker-server="nvcr.io" \
  --docker-username='$oauthtoken' \
  --docker-password=<<YOUR-NGC-API-KEY>>
```
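As a quick sanity check, confirm the secret exists in the target namespace (the secret name here follows the example above):

```
kubectl -n <<NAMESPACE-NAME>> get secret registry-secret
```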
Then, during install, specify the global image pull secret:

```
--set global.imagePullSecrets[0].name=registry-secret
```
The default ingress class used in this chart is nginx, but this can be changed by supplying:

```
--set global.ingress.annotations."kubernetes\.io/ingress\.class"="INGRESS-CONTROLLER"
```
The following two examples make use of nginx ingress, but other ingress providers, such as traefik, can be used. It's worth getting familiar with the set of options available under the `global.ingress` setting; a sketch combining several of them follows.
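As an illustrative sketch, several `global.ingress` options from the values table below can be combined at install time. The TLS entry assumes the standard Kubernetes Ingress TLS shape (secretName/hosts), and `ov-local-tls` is a placeholder secret name, not a chart default:

```
# TLS list shape is assumed; verify against the chart's ingress template.
--set global.ingress.host="ov.local" \
--set global.ingress.annotations."kubernetes\.io/ingress\.class"="nginx" \
--set global.ingress.tls[0].secretName=ov-local-tls \
--set global.ingress.tls[0].hosts[0]="ov.local"
```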
This example uses a simple local DNS setup.
For production setups you'll likely need to configure TLS cert secrets for the hosts (this is beyond the scope of this guide).
To use a custom local DNS, for example `ov.local`, we'll need to update /etc/hosts and map it to the Node's Internal IP. The resulting value to specify during `helm install` will be `--set global.ingress.host="ov.local"`.
This example also uses nginx ingress:
```
# install nginx controller if not already present.
helm install ingress-nginx \
  ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=32080
```
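Before installing the chart, it can help to wait until the controller pod is ready (the selector below matches the labels applied by the ingress-nginx chart):

```
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```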
```
helm upgrade \
  --install \
  nv.ov.svc.streaming.aws.nlb \
  . \
  --create-namespace \
  --namespace <<NAMESPACE-NAME>> \
  --set global.imagePullSecrets[0].name=registry-secret \
  --set global.ingress.annotations."kubernetes\.io/ingress\.class"="nginx" \
  --set global.ingress.host="ov.local"
```
Next, find the Node's Internal IP:

```
kubectl get nodes -o=wide
NAME                     STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
ov-local-control-plane   Ready    control-plane,master   3h22m   v1.23.4   172.25.0.2    <none>        Ubuntu 21.10   5.15.0-27-generic   containerd://1.5.10
```
Add an entry to your /etc/hosts:
```
sudo vi /etc/hosts
```

```
172.25.0.2  ov.local
```
Get the port that nginx is exposing in the NodePort service (32080 in this case):
```
kubectl get svc ingress-nginx-controller -n ingress-nginx
NAME                       TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   NodePort   10.96.68.95  <none>        80:32080/TCP,443:31312/TCP   47m
```
A quick check confirms we can reach the service from outside the cluster:
```
curl http://ov.local:32080/status
"OK"
```
When the ingress host is not specified, the ingress controller will default to using the controller service's IP, which is then routed via the Node's IP.
Here's a setup using nginx ingress without a hostname.
```
# install nginx controller if not already present.
helm install ingress-nginx \
  ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=32080
```
```
helm upgrade \
  --install \
  nv.ov.svc.streaming.aws.nlb \
  . \
  --create-namespace \
  --namespace <<NAMESPACE-NAME>> \
  --set global.imagePullSecrets[0].name=registry-secret \
  --set global.ingress.annotations."kubernetes\.io/ingress\.class"="nginx" \
  --set global.ingress.host=null
```
```
kubectl get svc ingress-nginx-controller -n ingress-nginx
NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   NodePort   10.43.230.160   <none>        80:32080/TCP,443:32681/TCP
```
```
kubectl -n <<NAMESPACE-NAME>> get ingress
NAME                CLASS    HOSTS   ADDRESS         PORTS   AGE
omniverse-example   <none>   *       10.43.230.160   80      10m
```
```
kubectl get nodes -o wide
NAME                    STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION      CONTAINER-RUNTIME
k3d-ov-local-server-0   Ready    control-plane,master   21h   v1.23.6+k3s1   172.18.0.3    <none>        K3s dev    5.15.0-43-generic   containerd://1.5.5
```
From above, port 80 is mapped to 32080 (and 443 to 32681) on the ingress-nginx-controller, and the node's internal IP is 172.18.0.3.
A quick check confirms we can reach the service from outside the cluster:
```
curl http://172.18.0.3:32080/status
"OK"
```
To view the chart's values, use `helm show values`.
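For example, run it against the chart directory used in the install commands above, redirecting the output to a file for reference (the filename is arbitrary):

```
helm show values . > my-values.yaml
```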
NOTE: values prefixed with `global.` apply across the services in the chart; the `nlb.*` values are specific to this service.
| Key | Type | Default | Description |
|---|---|---|---|
| `global.imagePullSecrets` | list | `[]` | Global image pull secrets used within the services. |
| `global.ingress.annotations` | object | `{"kubernetes.io/ingress.class":"nginx"}` | Global Ingress annotations. |
| `global.ingress.host` | string | `""` | Global Ingress host. |
| `global.ingress.paths` | list | `[]` | Global Ingress paths. |
| `global.ingress.tls` | list | `[]` | Global Ingress TLS. |
| `global.transportHost` | string | `"0.0.0.0"` | Specify the services' transport host. For IPv6 use `"::"`. |
| `nlb.affinity` | object | `{}` | Affinity for pod assignment. https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity |
| `nlb.deploymentLabels` | object | `{}` | Deployment spec labels. |
| `nlb.env` | list | `[{"name":"OTEL_SERVICE_NAME","value":"nv.ov.svc.streaming.aws.nlb"},{"name":"OTEL_EXPORTER_OTLP_METRICS_PROTOCOL","value":"http/protobuf"}]` | Environment variables for the service container. https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core |
| `nlb.fullnameOverride` | string | `"nlb"` | Fully overrides the .fullname template. |
| `nlb.image.pullPolicy` | string | `"Always"` | Image pull policy. |
| `nlb.image.repository` | string | `"nvcr.io/omniverse/prerel/nv-ov-svc-streaming-aws-nlb"` | Image repository. |
| `nlb.image.tag` | string | `"1.0.0-beta.3"` | Image tag. |
| `nlb.imagePullSecrets` | list | `[]` | Image Pull Secrets. |
| `nlb.ingress.enabled` | bool | `false` | Enables the creation of the Ingress resource. |
| `nlb.ingress.path` | string | `"/"` | Path for ingress. |
| `nlb.ingress.pathType` | string | `"Prefix"` | Path Type for ingress. |
| `nlb.livenessProbe` | object | `{"failureThreshold":5,"httpGet":{"path":"/health","port":"http"},"initialDelaySeconds":30,"periodSeconds":5}` | LivenessProbe for the service. NOTE: the service must expose an endpoint at the specified "path". https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core |
| `nlb.monitoring.enabled` | bool | `false` | Enables the creation of the ServiceMonitor resource. |
| `nlb.monitoring.prometheusNamespace` | string | `"monitoring"` | Prometheus namespace. |
| `nlb.name` | string | `"nlb"` | |
| `nlb.nameOverride` | string | `""` | Partially overrides the .fullname template (maintains the release name). |
| `nlb.nodeSelector` | object | `{}` | Node labels for pod assignment. https://kubernetes.io/docs/user-guide/node-selection/ |
| `nlb.podAnnotations` | object | `{}` | Pod annotations. https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ |
| `nlb.podSecurityContext` | object | `{}` | Security Context. https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod |
| `nlb.podlabels` | object | `{}` | Pod labels. |
| `nlb.readinessProbe` | object | `{"failureThreshold":5,"httpGet":{"path":"/ready","port":"http"},"initialDelaySeconds":30,"periodSeconds":5}` | ReadinessProbe for the service. NOTE: the service must expose an endpoint at the specified "path". https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core |
| `nlb.replicaCount` | int | `1` | Number of replicas. |
| `nlb.resources` | object | `{}` | Container resource requests and limits. https://kubernetes.io/docs/user-guide/compute-resources/ |
| `nlb.revisionHistoryLimit` | int | `5` | |
| `nlb.securityContext` | object | `{"runAsNonRoot":true,"runAsUser":1000}` | Security Context. https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod |
| `nlb.service.annotations` | object | `{}` | Annotations. |
| `nlb.service.containerPort` | int | `8011` | Container port. |
| `nlb.service.labels` | object | `{}` | Labels. |
| `nlb.service.name` | string | `"nlb"` | Name of the service. |
| `nlb.service.port` | int | `80` | Service port. |
| `nlb.service.portName` | string | `"http"` | Port name. |
| `nlb.service.type` | string | `"ClusterIP"` | Kubernetes service type. |
| `nlb.serviceAccount` | object | `{"enabled":true,"name":"omni-streaming-aws-nlb-controller"}` | Service Account. |
| `nlb.serviceConfig` | object | `{"http":{"cors":{"allow_credentials":false,"allow_headers":["*"],"allow_methods":["*"],"allow_origins":["*"],"enabled":true}},"logging":{"level":"INFO","production_mode":true},"metrics":{"collector_url":"opentelemetry.ov.local:8443","export_interval_s":30,"export_metrics_to_collector":true,"export_metrics_to_console":false,"secure":false},"ports":{"tcp":{"port_start":41001,"tls":{"certificate_arn":"","enabled":false,"ssl_policy":""}},"udp":{"port_start":41026}},"prefix_url":"","resource":{"dns":{"alias":{"tag":{"key":""}}},"lookup":{"tag":{"key":"","value":""}}},"root_path":"","stream":{"limit":25},"tracing":{"enable_binary_file_exporter":false,"otlp_collector_url":"opentelemetry.ov.local:8443"}}` | Configuration specific to this service. |
| `nlb.serviceConfig.logging.level` | string | `"INFO"` | Log level for the application (valid levels: INFO, DEBUG, WARN, ERROR). |
| `nlb.serviceConfig.metrics` | object | `{"collector_url":"opentelemetry.ov.local:8443","export_interval_s":30,"export_metrics_to_collector":true,"export_metrics_to_console":false,"secure":false}` | Metrics-related settings. |
| `nlb.serviceConfig.ports` | object | `{"tcp":{"port_start":41001,"tls":{"certificate_arn":"","enabled":false,"ssl_policy":""}},"udp":{"port_start":41026}}` | Port settings. |
| `nlb.serviceConfig.ports.tcp` | object | `{"port_start":41001,"tls":{"certificate_arn":"","enabled":false,"ssl_policy":""}}` | TCP port settings. |
| `nlb.serviceConfig.ports.tcp.port_start` | int | `41001` | Starting TCP port to configure listener(s)/target group(s) on. |
| `nlb.serviceConfig.ports.tcp.tls.enabled` | bool | `false` | TLS settings. |
| `nlb.serviceConfig.ports.udp.port_start` | int | `41026` | Starting UDP port to configure listener(s)/target group(s) on. |
| `nlb.serviceConfig.prefix_url` | string | `""` | URL prefix for the service. |
| `nlb.serviceConfig.resource` | object | `{"dns":{"alias":{"tag":{"key":""}}},"lookup":{"tag":{"key":"","value":""}}}` | Resource management settings. |
| `nlb.serviceConfig.resource.dns` | object | `{"alias":{"tag":{"key":""}}}` | AWS resource tag key (DNS alias). |
| `nlb.serviceConfig.resource.lookup` | object | `{"tag":{"key":"","value":""}}` | AWS resource tag key (dynamic NLB lookup). |
| `nlb.serviceConfig.root_path` | string | `""` | Root path for the application. NOTE: useful when behind a proxy. https://fastapi.tiangolo.com/advanced/behind-a-proxy/ |
| `nlb.serviceConfig.stream` | object | `{"limit":25}` | Stream settings. |
| `nlb.serviceConfig.stream.limit` | int | `25` | The stream limit to target. The service will try to create the required listener/target groups (protocols from mapping) on each NLB to reach this limit. |
| `nlb.serviceConfig.tracing` | object | `{"enable_binary_file_exporter":false,"otlp_collector_url":"opentelemetry.ov.local:8443"}` | Tracing-related settings. |
| `nlb.startupProbe` | object | `{"failureThreshold":5,"httpGet":{"path":"/startup","port":"http"},"initialDelaySeconds":30,"periodSeconds":5}` | StartupProbe for the service. NOTE: the service must expose an endpoint at the specified "path". https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core |
| `nlb.tolerations` | list | `[]` | Tolerations for pod assignment. https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ |
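As an illustrative sketch, several `nlb.serviceConfig` values can be overridden at install time. The stream limit and the lookup tag key/value below are placeholders, not chart defaults:

```
helm upgrade \
  --install \
  nv.ov.svc.streaming.aws.nlb \
  . \
  --namespace <<NAMESPACE-NAME>> \
  --set global.imagePullSecrets[0].name=registry-secret \
  --set nlb.serviceConfig.stream.limit=50 \
  --set nlb.serviceConfig.resource.lookup.tag.key="nlb-pool" \
  --set nlb.serviceConfig.resource.lookup.tag.value="streaming"
```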