Cosmos World Foundation Models: A family of highly performant pre-trained world foundation models purpose-built for generating physics-aware videos and world states for physical AI development.
The Cosmos autoregressive models are a collection of pre-trained world foundation models that are ideal for predicting and rapidly generating video sequences from video or image inputs for physical AI. They can serve as the building block for various applications or research related to world generation. The models are ready for commercial use under the NVIDIA Open Model License agreement.
Model Developer: NVIDIA
In the Cosmos 1.0 release, the Cosmos Autoregressive WFM family includes the following models:

- Cosmos-1.0-Autoregressive-4B
- Cosmos-1.0-Autoregressive-5B-Video2World
- Cosmos-1.0-Autoregressive-12B
- Cosmos-1.0-Autoregressive-13B-Video2World
This model is released under the NVIDIA Open Model License. For a custom license, please contact cosmos-license@nvidia.com.
Under the NVIDIA Open Model License, NVIDIA confirms:
Important Note: If you bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism contained in the Model, your rights under NVIDIA Open Model License Agreement will automatically terminate.
Cosmos-1.0-Autoregressive-4B is an autoregressive transformer model designed for world generation. The network is built from interleaved self-attention and feedforward layers.
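The interleaved attention/feedforward structure can be sketched with a toy single-head NumPy block. This is an illustrative sketch only, not the Cosmos implementation: the real model uses multi-head attention, normalization, and much larger dimensions, all of which are simplified or omitted here.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head causal self-attention over a token sequence x of shape (T, D)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Causal mask: each token attends only to itself and earlier tokens,
    # which is what makes the model autoregressive.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    return softmax(scores) @ v

def feedforward(x, w1, w2):
    """Position-wise feedforward sublayer with a ReLU nonlinearity."""
    return np.maximum(x @ w1, 0.0) @ w2

def transformer_block(x, p):
    """One interleaved block: attention sublayer, then feedforward sublayer,
    each wrapped in a residual connection (layer norm omitted for brevity)."""
    x = x + self_attention(x, p["wq"], p["wk"], p["wv"])
    x = x + feedforward(x, p["w1"], p["w2"])
    return x

rng = np.random.default_rng(0)
T, D, H = 8, 16, 64  # toy sequence length, model width, hidden width
params = {
    "wq": rng.normal(0, 0.02, (D, D)),
    "wk": rng.normal(0, 0.02, (D, D)),
    "wv": rng.normal(0, 0.02, (D, D)),
    "w1": rng.normal(0, 0.02, (D, H)),
    "w2": rng.normal(0, 0.02, (H, D)),
}
x = rng.normal(0.0, 1.0, (T, D))
y = transformer_block(x, params)
print(y.shape)  # (8, 16)
```

The causal mask is the key property for world generation: perturbing a later token cannot change the outputs at earlier positions, so frames can be emitted one step at a time.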
Input
Output
Runtime Engine(s):
Supported Hardware Microarchitecture Compatibility:
Note: We have only tested inference with BF16 precision.
Operating System(s):
Please see our technical paper for detailed evaluations.
These numbers may vary based on system specifications and are provided for reference only.
| Offloading Strategy | Cosmos-1.0-Autoregressive-4B | Cosmos-1.0-Autoregressive-12B |
|---|---|---|
| No offloading | 31.3 GB | 47.5 GB |
| Guardrails | 28.9 GB | 45.2 GB |
| Guardrails & Diffusion decoder | 28.5 GB | 43.1 GB |
| Guardrails & Diffusion decoder & Tokenizer | 27.3 GB | 42.9 GB |
| Guardrails & Diffusion decoder & Tokenizer & AR model | 18.7 GB | 27.4 GB |
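As a quick way to read the table, the memory saved by each strategy relative to no offloading can be computed directly from the numbers above:

```python
# Peak memory (GB) from the table above, keyed by offloading strategy.
mem_4b = {
    "No offloading": 31.3,
    "Guardrails": 28.9,
    "Guardrails & Diffusion decoder": 28.5,
    "Guardrails & Diffusion decoder & Tokenizer": 27.3,
    "Guardrails & Diffusion decoder & Tokenizer & AR model": 18.7,
}
mem_12b = {
    "No offloading": 47.5,
    "Guardrails": 45.2,
    "Guardrails & Diffusion decoder": 43.1,
    "Guardrails & Diffusion decoder & Tokenizer": 42.9,
    "Guardrails & Diffusion decoder & Tokenizer & AR model": 27.4,
}

def savings_gb(mem):
    """GB saved by each strategy relative to running with no offloading."""
    base = mem["No offloading"]
    return {k: round(base - v, 1) for k, v in mem.items()}

full = "Guardrails & Diffusion decoder & Tokenizer & AR model"
print(savings_gb(mem_4b)[full])   # 12.6
print(savings_gb(mem_12b)[full])  # 20.1
```

Most of the savings come from offloading the AR model itself; the guardrail and tokenizer components contribute only a few GB each.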
End-to-end inference runtime on a single H100 GPU, without offloading and measured after model initialization:
| Cosmos-1.0-Autoregressive-4B | Cosmos-1.0-Autoregressive-12B |
|---|---|
| ~62 seconds | ~119 seconds |
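These totals can be turned into a rough per-generated-frame average. The sketch below assumes the single-image input setting (32 generated frames, per the video-extension description) and ignores any fixed per-clip overhead, so it is an upper bound on the true per-frame cost:

```python
# End-to-end runtime (s) from the table above; 32 generated frames
# corresponds to extending a single image to a 33-frame video.
runtime_s = {"4B": 62, "12B": 119}
GENERATED_FRAMES = 32

per_frame = {m: round(t / GENERATED_FRAMES, 2) for m, t in runtime_s.items()}
print(per_frame)  # {'4B': 1.94, '12B': 3.72}
```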
Our models now support video extension up to 33 frames. Starting from either a single image or a 9-frame video input, they generate the remaining frames to reach the 33-frame length (32 or 24 new frames, respectively).
We have evaluated all eight possible configurations (4 models × 2 vision input types: image or video) using 100 test videos from physical AI domains. Below are the failure rates for each configuration:
| Model | Image input | Video input (9 frames) |
|---|---|---|
| Cosmos-1.0-Autoregressive-4B | 15% | 1% |
| Cosmos-1.0-Autoregressive-5B-Video2World | 7% | 2% |
| Cosmos-1.0-Autoregressive-12B | 2% | 1% |
| Cosmos-1.0-Autoregressive-13B-Video2World | 3% | 0% |
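One practical way to use the table is to pick the model with the lowest failure rate for a given input type. A small sketch over the numbers above:

```python
# Failure rates from the table above, over the 100-video physical-AI test set.
failure_rate = {
    ("Cosmos-1.0-Autoregressive-4B", "image"): 0.15,
    ("Cosmos-1.0-Autoregressive-4B", "video"): 0.01,
    ("Cosmos-1.0-Autoregressive-5B-Video2World", "image"): 0.07,
    ("Cosmos-1.0-Autoregressive-5B-Video2World", "video"): 0.02,
    ("Cosmos-1.0-Autoregressive-12B", "image"): 0.02,
    ("Cosmos-1.0-Autoregressive-12B", "video"): 0.01,
    ("Cosmos-1.0-Autoregressive-13B-Video2World", "image"): 0.03,
    ("Cosmos-1.0-Autoregressive-13B-Video2World", "video"): 0.00,
}

def best_model(input_type):
    """Model with the lowest failure rate for the given input type."""
    candidates = {m: r for (m, t), r in failure_rate.items() if t == input_type}
    return min(candidates, key=candidates.get)

print(best_model("image"))  # Cosmos-1.0-Autoregressive-12B
print(best_model("video"))  # Cosmos-1.0-Autoregressive-13B-Video2World
```

Note the trend the table shows: video input (9 frames of context) is far more reliable than a single image, and the larger models degrade less on image input.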
We define failure cases as videos with severe distortions, such as:
Note that the following are not considered failures in our analysis:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below. Please report security vulnerabilities or NVIDIA AI Concerns here.