The NV One Click framework is a collection of tools and scripts that allow developers to prototype and share a simplified means of running their application on various on-premise and cloud platforms.
This guide will discuss how the framework will be used by Application users to deploy the Audio2Face-3D on their own on-premise or cloud infrastructure. Refer to the Audio2Face-3D documentation for more details about Audio2Face-3D.
IMPORTANT: Ensure the framework is used in a compatible environment and that the required dependencies are installed. Running ./envbuild.sh or ./install-pre-requisites.sh will prompt you to install any of these tools that are found missing or outdated. Refer to the README.md inside the artifact for detailed instructions based on the shape.
The generated NV One Click deployable artifacts for Audio2Face-3D can be found in the current working directory.
$ ls *.tar.gz
deploy-audio2face3d-aws-cns.tar.gz deploy-audio2face3d-azure-cns.tar.gz
Folder Structure of the tarball
dist
├── ansible-requirements.yml
├── app-tasks.yml
├── config-files
├── config-template.yml
├── envbuild.sh
├── iac-ref
├── infra-tasks.yml
├── install-pre-requisites.sh
├── MANIFEST
├── modules
├── platform-tasks.yml
├── playbooks
├── README.md
└── setup-cns-access.sh
Item | Description |
---|---|
ansible-requirements.yml | Definitions of the Ansible roles and collections required by the playbooks. |
app-tasks.yml | Tasks executed during the app stage. |
config-template.yml | Template the App Deployer copies and overrides with the required configs and credentials. |
config-files | Config files used by the tasks. |
envbuild.sh | Script the App Deployer runs to set up infra, app, or both. |
iac-ref | Infra resource definitions (Terraform). |
infra-tasks.yml | Tasks executed during the infra stage. |
install-pre-requisites.sh | Script the App Deployer runs to install the prerequisites for envbuild.sh. |
MANIFEST | Information about the source of the artifact. |
modules | Terraform modules. |
platform-tasks.yml | Tasks executed during the platform stage. |
playbooks | Ansible plays. |
README.md | Install instructions for the App Deployer. |
setup-cns-access.sh | Script the App Deployer runs to give the local machine access to the created CNS cluster(s). |
The Application Deployer uses the generated artifact to deploy the app to their own infrastructure environment. The next steps outline how to override the configuration values and execute the deployment.
The examples below use the AWS package; the same steps apply to the Azure package.
$ tar -xvf deploy-audio2face3d-aws-cns.tar.gz
$ cd dist/
sudo ls # Need passwordless sudo session for some time
chmod +x ./install-pre-requisites.sh
./install-pre-requisites.sh
# all pre-requisites are met
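As a quick sanity check after the prerequisites script finishes, you can confirm the expected tools are on PATH. The tool list below is an assumption based on the artifact contents (Ansible playbooks and Terraform modules), not an official list; adjust it to match your README.md.

```shell
# Assumed tool list (not from the artifact): Ansible drives the playbooks
# and Terraform drives iac-ref/modules. Adjust to match your README.md.
for tool in ansible terraform; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool (re-run ./install-pre-requisites.sh)"
  fi
done
```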
dist $ ./envbuild.sh
Usage: ./envbuild.sh (-v|--version)
or: ./envbuild.sh (-h|--help)
or: ./envbuild.sh (install/uninstall) (-c|--component <component>) [options]
or: ./envbuild.sh (force-destroy) [options]
or: ./envbuild.sh (info) [options]
install/uninstall components:
-c, --component one or more of all/infra/platform/app, pass arg multiple times for more than one
install/uninstall options:
-f, --config-file path to file containing config overrides, defaults to config.yml
-i, --skip-infra skip install/uninstall of infra component
-p, --skip-platform skip install/uninstall of platform component
-a, --skip-app skip install/uninstall of app component
-d, --dry-run don't make any changes, instead, try to predict some of the changes that may occur
-h, --help provide usage information
force-destroy options:
-f, --config-file path to file containing config overrides, defaults to config.yml
-h, --help provide usage information
info options:
-f, --config-file path to file containing config overrides, defaults to config.yml
-h, --help provide usage information
dist $ ./envbuild.sh install -c all --dry-run
config file (config.yml) not found
please use config-template.yml to create the config.yml
...
Copy config-template.yml to config.yml and override the values:
dist $ cp config-template.yml config.yml
Refer to dist/README.md for a detailed explanation of how to configure it.
config.yml
schema_version: 'x.y.z'
name: # used as terraform state key + prefix for resource names
spec:
infra:
csp: 'aws' # replace accordingly
backend: # terraform state storage details
provider: # CSP infra provider details
platform: # details for platform level setup
app: # details for app level setup
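For reference, a filled-in sketch for AWS might look like the following. All values are placeholders, the nesting of platform and app under spec is an assumption, and the exact keys under backend, provider, platform, and app are documented in dist/README.md.

```yaml
# Illustrative sketch only; consult dist/README.md for the full schema.
schema_version: 'x.y.z'          # keep the version shipped in config-template.yml
name: 'audio2face3d-aws-cns'     # terraform state key + resource-name prefix
spec:
  infra:
    csp: 'aws'
    backend: {}                  # terraform state storage details (see README.md)
    provider: {}                 # CSP infra provider details (see README.md)
  platform: {}                   # details for platform level setup
  app: {}                        # details for app level setup
```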
Provide the name of the VM instance. The default name is shown below; to start a fresh deployment on a new VM, provide a different name.
name: 'audio2face3d-aws-cns'
Make sure to provide your machine's public IP address in the access_cidrs section, or a list of IP addresses that should be able to access the VM:
export LOCAL_CIDR=$(curl -s ifconfig.me)
access_cidrs:
- '{{ lookup("env", "LOCAL_CIDR") }}/32'
If you have multiple IP addresses to access the VM, you can provide them as a list of IP addresses:
access_cidrs:
- '{{ lookup("env", "LOCAL_CIDR") }}/32'
- '216.228.112.20/32'
- '216.228.112.21/32'
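It can be worth sanity-checking the value before it is templated into access_cidrs, since a captive portal or proxy can make `curl -s ifconfig.me` return HTML instead of an IP. The is_ipv4 helper below is a hypothetical convenience, not part of the artifact.

```shell
# Hypothetical helper (not part of the artifact): check that a value looks
# like a dotted-quad IPv4 address before using it as an access_cidrs entry.
is_ipv4() {
  printf '%s' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

LOCAL_CIDR="216.228.112.22"   # e.g. LOCAL_CIDR=$(curl -s ifconfig.me)
if is_ipv4 "$LOCAL_CIDR"; then
  echo "access_cidrs entry: ${LOCAL_CIDR}/32"
else
  echo "unexpected value, not an IPv4 address: $LOCAL_CIDR" >&2
fi
```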
Additionally, add/update your CSP instance details here. For example:
master:
size: 'g6e.2xlarge' # https://aws.amazon.com/ec2/instance-types/g6e/
You can update other sections in the config, such as 'backend', as needed.
Then, export the environment variables:
#AWS
export AWS_ACCESS_KEY_ID=<your_aws_access_key>
export AWS_SECRET_ACCESS_KEY=<your_aws_secret_key>
export AWS_STATE_DYNAMODB_TABLE=<your_aws_dynamodb_table>
export AWS_STATE_BUCKET=<your_aws_state_bucket>
export AWS_STATE_REGION=<aws_instance_region> # ex: us-west-2
export NGC_CLI_API_KEY=<your_ngc_key>
export NVOC_SSH_PRIVATE_KEY_FILE=${HOME}/.ssh/id_rsa
export NVOC_SSH_PUBLIC_KEY_FILE=${HOME}/.ssh/id_rsa.pub
export LOCAL_CIDR=$(curl -s ifconfig.me)
export A2F_3D_CHART_VERSION=<Audio2Face-3D Helm chart version> # ex: 1.3.15
#AZURE RESOURCE MANAGER(ARM)
export ARM_TENANT_ID=<your_arm_tenant_id>
export ARM_SUBSCRIPTION_ID=<your_arm_subscription_id>
export ARM_CLIENT_ID=<your_arm_client_id>
export ARM_CLIENT_SECRET=<your_arm_client_secret>
export ARM_RESOURCE_GRP_NAME=<your_arm_resource_group_name>
export ARM_STORAGE_ACCT_NAME=<your_arm_storage_account_name>
export ARM_CONTAINER_NAME=<your_arm_container_name>
export ARM_LOCATION=<your_arm_location> # ex: 'West US'
export NGC_CLI_API_KEY=<your_ngc_key>
export NVOC_SSH_PRIVATE_KEY_FILE=${HOME}/.ssh/id_rsa
export NVOC_SSH_PUBLIC_KEY_FILE=${HOME}/.ssh/id_rsa.pub
export LOCAL_CIDR=$(curl -s ifconfig.me)
export A2F_3D_CHART_VERSION=<Audio2Face-3D Helm chart version> # ex: 1.3.15
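Before running envbuild.sh, it can help to fail fast on unset variables. The check_vars helper below is a hypothetical convenience, not part of the artifact; the variable list mirrors the AWS exports above (swap in the ARM_* names for the Azure package).

```shell
# Hypothetical pre-flight helper (not part of the artifact): print each
# unset or empty variable name and return non-zero if any is missing.
check_vars() {
  missing=0
  for v in "$@"; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}

# AWS variable list from above; swap in the ARM_* names for Azure.
check_vars AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_STATE_DYNAMODB_TABLE \
           AWS_STATE_BUCKET AWS_STATE_REGION NGC_CLI_API_KEY \
           NVOC_SSH_PRIVATE_KEY_FILE NVOC_SSH_PUBLIC_KEY_FILE \
           LOCAL_CIDR A2F_3D_CHART_VERSION \
  || echo "set the variables above before running ./envbuild.sh"
```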
Execute the script to deploy and set up all components. This sets up the infra (an AWS EC2 instance with CNS configured), then deploys the platform-level and app-level Helm charts to the Kubernetes node.
dist $ ./envbuild.sh install -c all
After the setup logs complete, you can expect output like the below, with the IPs and URLs shared at the end.
===========================================================================================
access_urls:
app:
grpc: http://audio2face3d-aws-cns-app-alb-1969029236.us-west-2.elb.amazonaws.com:30010/
nim: http://audio2face3d-aws-cns-app-alb-1969029236.us-west-2.elb.amazonaws.com:30020/
prometheus: http://audio2face3d-aws-cns-app-alb-1969029236.us-west-2.elb.amazonaws.com:30030/
ssh_command:
app:
bastion: ssh -i /home/arg/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@44.246.238.111
master: ssh -i /home/arg/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -i /home/arg/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p ubuntu@44.246.238.111" ubuntu@10.0.1.14
===========================================================================================
===========================================================================================
access_urls:
app:
grpc: http://52.37.215.148:30010/
nim: http://52.37.215.148:30020/
prometheus: http://52.37.215.148:30030/
ssh_command:
app:
master: ssh -i /home/arg/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@52.37.215.148
===========================================================================================
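If you want to script against this summary, the URLs can be pulled out of the printed access_urls block with standard tools. A sketch, assuming the `key: value` layout shown above:

```shell
# Extract the grpc URL from a saved copy of the envbuild.sh summary.
# The sample below mirrors the access_urls layout printed above.
summary='access_urls:
  app:
    grpc: http://52.37.215.148:30010/
    nim: http://52.37.215.148:30020/
    prometheus: http://52.37.215.148:30030/'

GRPC_URL=$(printf '%s\n' "$summary" | awk '$1 == "grpc:" {print $2}')
echo "$GRPC_URL"
```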
SSH into the master node, which has kubectl configured, using the commands shared in the output.
ssh -i /home/arg/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -i /home/arg/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p ubuntu@44.246.238.111" ubuntu@10.0.1.14
ssh -i /home/arg/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ubuntu@52.37.215.148
Once inside the instance, verify that the Helm charts have been deployed to their respective namespaces.
$ helm ls --all-namespaces
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
audio2face3d-release default 1 2025-02-05 18:34:51.099874231 +0000 UTC deployed audio2face-3d-1.3.15
gpu-operator-1738740177 nvidia-gpu-operator 1 2025-02-05 07:22:59.754174946 +0000 UTC deployed gpu-operator-v24.6.2 v24.6.2
local-path-provisioner default 1 2025-02-05 07:26:25.849529634 +0000 UTC deployed local-path-provisioner-0.0.31 v0.0.30
Verify the app setup by ensuring the audio2face3d-release pods are up and the NodePort service is exposed.
$ kubectl get po
NAME READY STATUS RESTARTS AGE
a2f-a2f-deployment-77ff75f956-2v7b8 1/1 Running 0 22m
dnsutils 1/1 Running 0 11h
local-path-provisioner-798c745db-qxrc2 1/1 Running 0 11h
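When polling the cluster from a script (for example over the ssh command above), the same check can be automated by parsing `kubectl get po` output. A sketch, assuming the pod name prefix shown above:

```shell
# Parse `kubectl get po` output (captured here as a sample) and succeed only
# when the a2f deployment pod reports 1/1 Ready and Running.
pods='NAME                                     READY   STATUS    RESTARTS   AGE
a2f-a2f-deployment-77ff75f956-2v7b8      1/1     Running   0          22m'

if printf '%s\n' "$pods" | \
   awk '/^a2f-a2f-deployment/ && $2 == "1/1" && $3 == "Running" {ok=1} END {exit !ok}'
then
  echo "a2f pod is ready"
else
  echo "a2f pod is not ready yet" >&2
fi
```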
Refer to the Audio2Face-3D public documentation regarding how to expose the pod IP publicly: https://docs.nvidia.com/ace/audio2face-3d-microservice/latest/text/deployment/kubernetes.html#optional-expose-the-pod-ip-publicly
Once finished, you can uninstall the deployment using the command below:
dist $ ./envbuild.sh uninstall -c all
If you see an error like the one below, it means a VM has already been instantiated with the CIDR 216.228.112.22/32:
│ operation error EC2: ModifySecurityGroupRules, https response error StatusCode: 400, RequestID: 9ab42db1-381a-4faa-a8eb-9b569f91a48b,
│ api error InvalidPermission.Duplicate: the specified rule "peer: 216.228.112.22/32, TCP, from port: 30030, to port: 30030, ALLOW"
│ already exists
If you already know the VM instance details, you can SSH into it directly from your machine.
If you instead want to create a fresh VM instance for the same config.yml, delete the instance and re-run:
./envbuild.sh uninstall -c all
./envbuild.sh install -c all
Enterprise Support: Get access to knowledge base articles and support cases, or submit a ticket: https://www.nvidia.com/en-us/data-center/products/ai-enterprise-suite/support/
You may not use the Software or any of its components for the purpose of emotion recognition. Any technology included in the Software may only be used as fully integrated in the Software and consistent with all applicable documentation.
This software is governed by the NVIDIA Software License Agreement and Product Specific Terms for AI Products. Use of the models is governed by the NVIDIA Community Model License.
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI Concerns here.
You are responsible for ensuring that your use of NVIDIA AI Foundation Models complies with all applicable laws.