NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed to accelerate deployment of generative AI across cloud, data center, and workstations.
Benefits of self-hosted NIMs:

- Deploy anywhere and maintain control of your generative AI applications and data
- Streamline AI application development with industry-standard APIs and tools tailored for enterprise environments
- Prebuilt containers for the latest generative AI models, offering a diverse range of options and flexibility out of the gate
- Industry-leading latency and throughput for cost-effective scaling
- Out-of-the-box support for custom models, so models can be trained on domain-specific data
- Enterprise-grade software with dedicated feature branches, rigorous validation processes, and robust support structures
DiffDock is a state-of-the-art generative model for blind molecular docking pose estimation. It takes protein and molecule 3D structures as input and requires no information about a binding pocket. During its diffusion process, the molecule's position relative to the protein, its orientation, and its torsion angles are all allowed to change. By running the learned reverse diffusion process, DiffDock transforms a distribution of noisy prior molecule poses into the pose distribution learned by the model. The result is a set of sampled poses, which DiffDock ranks with its confidence model.
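Because DiffDock needs only the two 3D structures and no pocket specification, a docking request to a self-hosted NIM reduces to a small JSON body. The sketch below shows how such a request might be assembled; the field names (`protein`, `ligand`, `ligand_file_type`, `num_poses`) and the endpoint URL in the comment are assumptions for illustration — consult the NIM's API reference for the exact schema.

```python
import json

def build_docking_request(protein_pdb: str, ligand: str,
                          ligand_file_type: str = "sdf",
                          num_poses: int = 10) -> dict:
    """Assemble a hypothetical JSON body for a blind-docking request.

    Only the structures are needed: protein as PDB text, ligand as SDF
    text or a SMILES string. No binding-pocket information is passed.
    """
    return {
        "protein": protein_pdb,            # PDB file contents
        "ligand": ligand,                  # SDF contents or SMILES
        "ligand_file_type": ligand_file_type,
        "num_poses": num_poses,            # DiffDock samples many poses and ranks them
    }

# Placeholder inputs for illustration only.
payload = build_docking_request("ATOM ...", "CCO",
                                ligand_file_type="smiles", num_poses=20)
body = json.dumps(payload)

# The request could then be sent to the locally deployed NIM, e.g. with
# requests.post("http://localhost:8000/molecular-docking/diffdock/generate",
#               data=body, headers={"Content-Type": "application/json"})
# (endpoint path assumed; check the service's API documentation).
```

Keeping the payload construction separate from the HTTP call makes it easy to validate inputs before contacting the service.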
For optimal performance, deploy the supported NVIDIA AI Enterprise Infrastructure software with this NIM.
Before you start, ensure that your environment is set up by following one of the deployment guides available in the NVIDIA AI Enterprise Documentation.
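Once the environment is prepared, the NIM container is typically launched with Docker. The sketch below only prints the launch command rather than executing it, since running it requires a GPU host and valid NGC credentials; the image path, port, and environment variable names are assumptions — check the deployment guide for the exact values.

```shell
# Hypothetical launch sketch for a self-hosted DiffDock NIM.
export NGC_API_KEY="<your-ngc-api-key>"    # placeholder, not a real key
IMAGE="nvcr.io/nim/mit/diffdock:latest"    # assumed container image path

# Print the command instead of running it; actual execution needs
# a GPU host, Docker with the NVIDIA runtime, and NGC access.
echo docker run --rm --gpus all \
  -e NGC_API_KEY \
  -p 8000:8000 \
  "$IMAGE"
```

In a real deployment you would run the printed command directly and then point your application at the exposed port.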
Get access to knowledge base articles and support cases or submit a ticket.
Visit the NVIDIA AI Enterprise Documentation Hub for release documentation, deployment guides, and more.
This NIM is licensed under the NVIDIA AI Product Agreement. By downloading and using the artifacts in this collection, you accept the terms and conditions of this license.