Linux / amd64
NVIDIA NIM™, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing across clouds, data centers, and workstations. Supporting a wide range of AI models, including open-source models, NVIDIA AI Foundation models, and custom models, it ensures seamless, scalable AI inferencing, on-premises or in the cloud, leveraging industry-standard APIs.
The Maxine Studio Voice model provides accelerated, real-time speech enhancement, transforming input speech recorded through low-quality microphones in noisy and reverberant environments into studio-recorded quality speech. This NIM offers three modes for enhancing speech at different sample rates: Quality Mode at 48 kHz, Quality Mode at 16 kHz, and Low Latency Mode at 48 kHz.
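Whichever mode is used, a client typically streams short, fixed-length audio frames to the service at the mode's sample rate. The helper below is a hedged sketch of that client-side framing step; the 10 ms frame length and zero-padding behavior are illustrative assumptions, not documented Studio Voice parameters.

```python
def frame_audio(samples, sample_rate_hz, frame_ms=10):
    """Split a sequence of PCM samples into frames of frame_ms milliseconds.

    The final partial frame, if any, is zero-padded to full length so every
    frame sent to the service has the same size.
    """
    # Number of samples per frame, e.g. 480 samples for 10 ms at 48 kHz.
    frame_len = sample_rate_hz * frame_ms // 1000
    frames = []
    for start in range(0, len(samples), frame_len):
        frame = list(samples[start:start + frame_len])
        if len(frame) < frame_len:
            frame += [0] * (frame_len - len(frame))  # pad the last frame
        frames.append(frame)
    return frames
```

For example, 1,000 samples of 48 kHz audio yield three 480-sample frames, the last one zero-padded; at 16 kHz the same frame duration is only 160 samples, which is why the mode's sample rate matters to the client.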
NVIDIA NIM offers prebuilt containers for AI models across computer vision, audio, LLMs, and more. Each NIM consists of a container and a model and uses a CUDA-accelerated runtime for all NVIDIA GPUs, with special optimizations available for many configurations. Whether on-premises or in the cloud, NIM is the fastest way to achieve accelerated inference at scale.
Deploying and integrating NVIDIA NIM is straightforward thanks to our industry-standard APIs. Visit the Maxine Studio Voice NIM page for release documentation, deployment guides, and more.
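A typical NIM deployment pulls the container from the NVIDIA registry and runs it on a GPU-enabled host. The sketch below illustrates that flow; the image tag and exposed port are assumptions for illustration, so consult the Maxine Studio Voice NIM deployment guide for the exact image name and runtime flags.

```shell
# Authenticate to nvcr.io with your NGC API key (set in $NGC_API_KEY).
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin

# Run the NIM container with GPU access; image tag and port are hypothetical.
docker run -d --rm \
  --gpus all \
  -e NGC_API_KEY \
  -p 8001:8001 \
  nvcr.io/nvidia/maxine/maxine-studio-voice:latest
```

Running with `--gpus all` requires the NVIDIA Container Toolkit on the host; the container then serves the model through the NIM's standard APIs on the published port.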
Get access to knowledge base articles and support cases or submit a ticket.
The NIM container is governed by the NVIDIA AI Enterprise Software License Agreement, and the use of this model is governed by the NVIDIA AI Foundation Models Community License.
You are responsible for ensuring that your use of NVIDIA AI Foundation Models complies with all applicable laws.