The Application and Profile APIs manage and introspect applications and profiles. They cover three main areas:

- **Applications**: manages the applications that are available in a deployment.
- **Application versions**: manages the versions of an application that are available in an Omniverse Cloud instance. Versions follow the `major.minor.patch[-pre-release][+build-metadata]` format, for example:
  - `1.0.0`
  - `2.1.3-beta.1`
  - `3.2.4-beta+001`
  - `4.3.5-rc.1+build.123`
  - `5.4.6+build.metadata`
- **Application profiles**: predefined runtime environments for a given application, allowing an instance of an application to be fine-tuned by pre-configuring a set of resources, additional configuration, etc.
The primary use case for these APIs is to provide a detailed view of the applications, their versions and runtime profiles available in a deployment.
For developers aiming to integrate Omniverse Cloud Application Streams into custom web environments, these APIs provide the endpoints needed to check application availability and to initiate streams based on specific applications, their versions, and profiles.
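For example, a client integration might first list the available applications and then drill into a specific application's versions before requesting a stream. A minimal sketch, assuming the service is exposed at `ov.local:32080` with the chart's default `cfg/apps` URL prefix (the application id and exact endpoint paths are illustrative, not confirmed API routes):

```shell
# List the applications available in this deployment (illustrative path).
curl http://ov.local:32080/cfg/apps

# Inspect the versions of a hypothetical application id "usd-explorer" (illustrative path).
curl http://ov.local:32080/cfg/apps/usd-explorer/versions
```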
Before installing, it's worth considering Image Pull Secrets and Ingress access. Options and information are available in the Deployment Notes.
NOTE: Use `--dry-run` to inspect the Kubernetes manifests generated from the Helm chart.
```shell
helm upgrade \
  --install \
  nv.ov.svc.applications \
  . \
  --create-namespace \
  --namespace <<NAMESPACE-NAME>> \
  --set global.imagePullSecrets[0].name=registry-secret
```
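For example, appending `--dry-run` to the same command renders the manifests to stdout without applying anything to the cluster:

```shell
helm upgrade \
  --install \
  nv.ov.svc.applications \
  . \
  --create-namespace \
  --namespace <<NAMESPACE-NAME>> \
  --set global.imagePullSecrets[0].name=registry-secret \
  --dry-run
```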
To view the chart's default values, use `helm show values`.
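For example, from the chart directory:

```shell
# Print the chart's default values to stdout.
helm show values .

# Or save them to a file as a starting point for your own overrides.
helm show values . > my-values.yaml
```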
NOTE: There are global values as well (see the `global.*` keys in the table below).
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `applications.affinity` | object | `{}` | Affinity for pod assignment. https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity |
| `applications.applications` | object | `{}` | Applications to make available as part of this service. NOTE: applications and applicationversions are custom CRDs that can also be created after this chart has been deployed. |
| `applications.env` | object | `{}` | Env for the container of the service. https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core |
| `applications.fullnameOverride` | string | `"applications"` | Fully override the .fullname template. |
| `applications.image.pullPolicy` | string | `"Always"` | Image pull policy. |
| `applications.image.repository` | string | `"nvcr.io/omniverse/prerel/nv-ov-svc-applications"` | Image repository. |
| `applications.image.tag` | string | `"1.0.0-beta.3"` | Image tag. |
| `applications.imagePullSecrets` | list | `[]` | Image pull secrets. |
| `applications.ingress.enabled` | bool | `true` | Enables the creation of an Ingress resource. |
| `applications.ingress.path` | string | `"/"` | Path for the ingress. |
| `applications.ingress.pathType` | string | `"Prefix"` | Path type for the ingress. |
| `applications.livenessProbe` | object | `{"httpGet":{"path":"/health","port":"http"},"initialDelaySeconds":5,"periodSeconds":3}` | Liveness probe for the service. NOTE: the service must expose an endpoint at the specified "path". https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core |
| `applications.monitoring.enabled` | bool | `false` | Enables the creation of a ServiceMonitor resource. |
| `applications.monitoring.prometheusNamespace` | string | `"monitoring"` | Prometheus namespace. |
| `applications.name` | string | `"applications"` | |
| `applications.nameOverride` | string | `""` | Partially override the .fullname template (maintains the release name). |
| `applications.nodeSelector` | object | `{}` | Node labels for pod assignment. https://kubernetes.io/docs/user-guide/node-selection/ |
| `applications.podAnnotations` | object | `{}` | Pod annotations. https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ |
| `applications.podSecurityContext` | object | `{"runAsNonRoot":false}` | Pod security context. https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod |
| `applications.profiles` | object | `{}` | Profiles to make available as part of this service. NOTE: applicationprofiles are custom CRDs that can also be created after this chart has been deployed. |
| `applications.readinessProbe` | object | `{"httpGet":{"path":"/ready","port":"http"},"initialDelaySeconds":5,"periodSeconds":3}` | Readiness probe for the service. NOTE: the service must expose an endpoint at the specified "path". https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core |
| `applications.replicaCount` | int | `1` | Number of replicas. |
| `applications.resources` | object | `{}` | Container resource requests and limits. https://kubernetes.io/docs/user-guide/compute-resources/ |
| `applications.revisionHistoryLimit` | int | `5` | |
| `applications.securityContext` | object | `{}` | Container security context. https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod |
| `applications.service.containerPort` | int | `8080` | Container port. |
| `applications.service.name` | string | `"applications"` | Name of the service. |
| `applications.service.port` | int | `80` | Service port. |
| `applications.service.portName` | string | `"http"` | Port name. |
| `applications.service.resolverContainerPort` | int | `8081` | Container port for the resolver container. |
| `applications.service.resolverName` | string | `"resolver"` | Name of the service for the resolver container. |
| `applications.service.type` | string | `"ClusterIP"` | Kubernetes service type. |
| `applications.serviceConfig` | object | `{"app_store_args":{"api_version":"v1","namespace":""},"app_store_cls":"nv.ov.svc.applications.store.crd.CRDApplicationStore","logLevel":"INFO","prefix_url":"cfg/apps","root_path":"","show_status_endpoint":false}` | Configuration specific to this service. |
| `applications.serviceConfig.app_store_args` | object | `{"api_version":"v1","namespace":""}` | App store arguments. |
| `applications.serviceConfig.app_store_cls` | string | `"nv.ov.svc.applications.store.crd.CRDApplicationStore"` | App store class. |
| `applications.serviceConfig.logLevel` | string | `"INFO"` | Log level for the application (valid levels: INFO, DEBUG, WARN, ERROR). |
| `applications.serviceConfig.prefix_url` | string | `"cfg/apps"` | URL prefix for the service. |
| `applications.serviceConfig.root_path` | string | `""` | Root path for the application. NOTE: useful when behind a proxy. https://fastapi.tiangolo.com/advanced/behind-a-proxy/ |
| `applications.serviceConfig.show_status_endpoint` | bool | `false` | Show the /status and /metrics endpoints in the OpenAPI specification. |
| `applications.startupProbe` | object | `{"httpGet":{"path":"/startup","port":"http"},"initialDelaySeconds":5,"periodSeconds":3}` | Startup probe for the service. NOTE: the service must expose an endpoint at the specified "path". https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core |
| `applications.tolerations` | list | `[]` | Tolerations for pod assignment. https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ |
| `global.imagePullSecrets` | list | `[]` | Global image pull secrets used within the services. |
| `global.ingress.annotations` | object | `{"kubernetes.io/ingress.class":"nginx"}` | Global Ingress annotations. |
| `global.ingress.host` | string | `""` | Global Ingress host. |
| `global.ingress.paths` | list | `[]` | Global Ingress paths. |
| `global.ingress.tls` | list | `[]` | Global Ingress TLS. |
| `global.transportHost` | string | `"0.0.0.0"` | Transport host for the services. For IPv6, use `"::"`. |
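As a sketch, a values file overriding a handful of the keys above might look like this (the values shown are illustrative, not recommendations):

```yaml
# my-values.yaml -- illustrative overrides of the documented chart keys.
applications:
  replicaCount: 2
  serviceConfig:
    logLevel: "DEBUG"
global:
  imagePullSecrets:
    - name: registry-secret
  ingress:
    host: "ov.local"
```

It can then be passed to the install with `-f my-values.yaml` instead of repeating individual `--set` flags.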
The container images are hosted on nvcr.io in a private repository. You will therefore need to create an Image Pull Secret so that Kubernetes can pull the containers.
NOTE: If the `<<NAMESPACE-NAME>>` namespace does not exist, create it prior to creating the secret.
```shell
kubectl create namespace <<NAMESPACE-NAME>>
```

```shell
kubectl create secret docker-registry \
  registry-secret \
  --namespace <<NAMESPACE-NAME>> \
  --docker-server="nvcr.io" \
  --docker-username='$oauthtoken' \
  --docker-password=<<YOUR-NGC-API-KEY>>
```
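To confirm the secret exists before installing:

```shell
kubectl get secret registry-secret --namespace <<NAMESPACE-NAME>>
```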
Then, during install, specify the global image pull secret:

```shell
--set global.imagePullSecrets[0].name=registry-secret
```
The default ingress class used in this chart is nginx, but this can be changed by supplying `--set global.ingress.annotations."kubernetes\.io/ingress\.class"="INGRESS-CONTROLLER"`.
The following two examples make use of nginx ingress, but other ingress providers, such as traefik, can be used, as shown below.
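For instance, to target a Traefik controller instead (assuming one is already installed in the cluster):

```shell
--set global.ingress.annotations."kubernetes\.io/ingress\.class"="traefik"
```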
It's worth getting familiar with the set of options available under the `global.ingress` setting.
This example uses a simple local DNS setup.
For production setups, you'll likely need to configure TLS certificate secrets for the hosts (this is beyond the scope of this guide).
To use a custom local DNS name, for example `ov.local`, we'll need to update `/etc/hosts` and map it to the node's internal IP. The resulting value to specify during `helm install` is `--set global.ingress.host="ov.local"`.
This example also uses nginx ingress:
```shell
# Install the nginx ingress controller if not already present.
helm install ingress-nginx \
  ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=32080
```
```shell
helm upgrade \
  --install \
  nv.ov.svc.applications \
  . \
  --create-namespace \
  --namespace <<NAMESPACE-NAME>> \
  --set global.imagePullSecrets[0].name=registry-secret \
  --set global.ingress.annotations."kubernetes\.io/ingress\.class"="nginx" \
  --set global.ingress.host="ov.local"
```
Next, find the node's internal IP:

```shell
kubectl get nodes -o=wide
NAME                     STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
ov-local-control-plane   Ready    control-plane,master   3h22m   v1.23.4   172.25.0.2    <none>        Ubuntu 21.10   5.15.0-27-generic   containerd://1.5.10
```
Add an entry to your `/etc/hosts`:

```shell
sudo vi /etc/hosts
```

```
172.25.0.2 ov.local
```
Get the port your NGINX is exposing in the NodePort service (32080 in this case):

```shell
kubectl get svc ingress-nginx-controller -n ingress-nginx
NAME                       TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   NodePort   10.96.68.95   <none>        80:32080/TCP,443:31312/TCP   47m
```
A quick check confirms we can reach the service from outside the cluster:
```shell
curl http://ov.local:32080/status
"OK"
```
When the ingress host is not specified, the ingress controller defaults to using the controller service's IP, which is then routed via the node's IP. Here's a setup using nginx ingress without a hostname:
```shell
# Install the nginx ingress controller if not already present.
helm install ingress-nginx \
  ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=32080
```
```shell
helm upgrade \
  --install \
  nv.ov.svc.applications \
  . \
  --create-namespace \
  --namespace <<NAMESPACE-NAME>> \
  --set global.imagePullSecrets[0].name=registry-secret \
  --set global.ingress.annotations."kubernetes\.io/ingress\.class"="nginx" \
  --set global.ingress.host=null
```
```shell
kubectl get svc ingress-nginx-controller -n ingress-nginx
NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   NodePort   10.43.230.160   <none>        80:32080/TCP,443:32681/TCP
```
```shell
kubectl -n <<NAMESPACE-NAME>> get ingress
NAME                CLASS    HOSTS   ADDRESS         PORTS   AGE
omniverse-example   <none>   *       10.43.230.160   80      10m
```
```shell
kubectl get nodes -o wide
NAME                    STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION      CONTAINER-RUNTIME
k3d-ov-local-server-0   Ready    control-plane,master   21h   v1.23.6+k3s1   172.18.0.3    <none>        K3s dev    5.15.0-43-generic   containerd://1.5.5
```
From the above, port 80 is mapped to 32080 (and 443 to 32681) on the ingress-nginx-controller service, and the node's internal IP is 172.18.0.3.
A quick check confirms we can reach the service from outside the cluster:
```shell
curl http://172.18.0.3:32080/status
"OK"
```