Linux / amd64
NVIDIA NIM™, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed for the secure, reliable deployment of high-performance AI model inferencing across clouds, data centers, and workstations. Supporting a wide range of AI models, including open-source, NVIDIA AI Foundation, and custom models, it ensures seamless, scalable AI inferencing, on-premises or in the cloud, leveraging industry-standard APIs.
The Maxine Eye Contact model redirects eye gaze for video conference applications and telepresence.
The Maxine Eye Contact model estimates the gaze direction from the input video and synthesizes a redirected gaze using a region of interest around the eyes, known as an eye patch. More information about eye contact can be found in the NVIDIA developer blog.
NVIDIA NIM offers prebuilt containers for computer vision models. Each NIM consists of a container and a model and uses a CUDA-accelerated runtime for all NVIDIA GPUs, with special optimizations available for many configurations. Whether on-premises or in the cloud, NIM is the fastest way to achieve accelerated inference at scale.
Deploying and integrating NVIDIA NIM is straightforward thanks to our industry-standard APIs. Visit the Maxine Eye Contact NIM page for release documentation, deployment guides, and more.
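As a rough illustration of what addressing a locally deployed NIM over an industry-standard HTTP API might look like, the sketch below composes an endpoint URL for a client request. The host, port, and route here are illustrative assumptions only, not the documented Maxine Eye Contact interface; consult the Maxine Eye Contact NIM release documentation for the actual API surface.

```python
# Hypothetical sketch: composing the base URL for a NIM-style inference
# endpoint running in a local container. The default host, port, and
# route are assumptions for illustration, not documented values.

def build_endpoint(host: str = "localhost", port: int = 8000,
                   route: str = "/v1/infer") -> str:
    """Compose the base URL a client would target for inference."""
    return f"http://{host}:{port}{route}"

if __name__ == "__main__":
    # A real client would POST video frames to this URL and read back
    # the gaze-redirected output, per the NIM deployment guide.
    print(build_endpoint())  # http://localhost:8000/v1/infer
```

The actual request and response formats (payload fields, media encoding, authentication) are defined in the Maxine Eye Contact NIM documentation and should be taken from there rather than from this sketch.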
Get access to knowledge base articles and support cases or submit a ticket.
The NIM container is governed by the NVIDIA AI Enterprise Software License Agreement, and the use of this model is governed by the NVIDIA AI Foundation Models Community License Agreement.
You are responsible for ensuring that your use of NVIDIA AI Foundation Models complies with all applicable laws.