
Modulus Checkpoints: Ahmed Body MeshGraphNet

Description: MeshGraphNet model package for external aerodynamics evaluation of Ahmed body-type geometries.
Publisher: NVIDIA
Latest Version: v0.2
Modified: October 16, 2023
Size: 27.27 MB

Details

This NGC asset is a MeshGraphNet model checkpoint package trained for Ahmed body geometries. A model checkpoint package is the set of artifacts needed to run inference with a pre-trained model: the model checkpoint, a set of sample inputs, and an inference script.

Architecture

GNNs are well suited for challenging problems involving intricate graph structures, such as those encountered in physics, biology, and social networks. By leveraging the structure of graphs, GNNs can learn and make predictions based on the relationships among the nodes of a graph. The MeshGraphNet architecture is based on the work by Tobias Pfaff et al., and the pretrained model checkpoint comes from the Ahmed body example in Modulus.

Training

The AeroGraphNet model is based on the MeshGraphNet architecture, which is instrumental for learning from mesh-based data using GNNs. The inputs to the model are the Ahmed body surface mesh, the Reynolds number, geometry parameters (optional: length, width, height, ground clearance, slant angle, and fillet radius), and surface normals (optional). The outputs of the model are the surface pressure, the wall shear stresses, and the drag coefficient.
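
For orientation, below is a minimal sketch of how such a model could be instantiated from the nvidia-modulus package. The feature dimensions are illustrative placeholders rather than the values used for this checkpoint, and the import path, argument order, and forward signature are assumptions based on recent Modulus releases; the bundled inference script is the authoritative reference.

    import torch
    from modulus.models.meshgraphnet import MeshGraphNet  # assumed import path

    # Illustrative feature sizes only -- the real values are defined by the
    # bundled inference script and its checkpoint.
    input_dim_nodes = 11   # e.g. surface normals, Reynolds number, geometry parameters
    input_dim_edges = 4    # e.g. relative displacement vector and its norm
    output_dim = 4         # e.g. surface pressure and wall shear stress components

    # Assumed positional argument order: (node dim, edge dim, output dim).
    model = MeshGraphNet(input_dim_nodes, input_dim_edges, output_dim)
    model.eval()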

The input to the model is a .vtp file, which the dataloader converts into a bi-directional DGL graph. The final results are also written out as .vtp files by the inference code. A hidden dimensionality of 256 is used in the encoder, processor, and decoder. The encoder and decoder consist of two hidden layers, and the processor includes 15 message-passing layers with summation aggregation. The batch size per GPU is set to 1. A learning rate of 0.0001 is used, decaying exponentially with a rate of 0.99985. Training is performed on 8 NVIDIA A100 GPUs, leveraging data parallelism, for 500 epochs with a total training time of 4 hours.
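
As a rough illustration of the .vtp-to-graph conversion described above, the sketch below reads a surface mesh with PyVista and builds a bi-directional DGL graph from its triangle connectivity. This is an approximation for illustration, not the actual Modulus dataloader; the file name and the feature choices are assumptions.

    import numpy as np
    import pyvista as pv
    import torch
    import dgl

    # Read the Ahmed body surface mesh (file name is a placeholder).
    mesh = pv.read("ahmed_body.vtp")

    # Build an edge list from the triangle connectivity. After triangulate(),
    # `faces` is a flat array [3, a, b, c, 3, d, e, f, ...].
    tri = mesh.triangulate()
    faces = tri.faces.reshape(-1, 4)[:, 1:]
    src = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2]])
    dst = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0]])

    # Add both edge directions and drop duplicates to get a bi-directional graph.
    graph = dgl.graph(
        (torch.from_numpy(np.concatenate([src, dst])),
         torch.from_numpy(np.concatenate([dst, src]))),
        num_nodes=tri.n_points,
    )
    graph = dgl.to_simple(graph)

    # Node features: here just the vertex coordinates, for illustration.
    graph.ndata["pos"] = torch.tensor(np.asarray(tri.points), dtype=torch.float32)

    # Edge features: relative displacement between endpoints and its norm.
    u, v = graph.edges()
    disp = graph.ndata["pos"][v] - graph.ndata["pos"][u]
    graph.edata["x"] = torch.cat([disp, disp.norm(dim=1, keepdim=True)], dim=1)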

How to use?

A minimal inference script is provided in this model checkpoint package to get you started easily.

To run inference with this checkpoint, follow the steps below (all required files are included in the zip file):

  1. Launch Modulus Docker Container

    docker run --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --runtime nvidia -v ${PWD}:/examples -it nvcr.io/nvidia/modulus/modulus:23.09
    
  2. This example requires the latest version of nvidia-modulus. Inside the container, update it using the command below:

    pip install git+https://github.com/NVIDIA/modulus.git
    
  3. Download this checkpoint zip file and unzip it

    wget 'https://api.ngc.nvidia.com/v2/models/nvidia/modulus/modulus_ahmed_body_meshgraphnet/versions/v0.2/files/ahmed_body_mgn.zip'
    unzip ahmed_body_mgn.zip
    
  4. Run the inference script

    cd ahmed_body_mgn/
    python inference.py
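
The bundled inference.py is the authoritative entry point; the sketch below only illustrates the general flow it follows (build the graph from a sample .vtp input, restore the checkpoint, run a forward pass, and write the predictions back to a .vtp file). The file names, checkpoint layout, feature wiring, and the model's forward signature are assumptions and will differ in detail from the actual script.

    import numpy as np
    import pyvista as pv
    import torch
    import dgl
    from modulus.models.meshgraphnet import MeshGraphNet  # assumed import path

    def build_graph(mesh: pv.PolyData) -> dgl.DGLGraph:
        """Bi-directional DGL graph from the mesh connectivity (see the sketch above)."""
        faces = mesh.triangulate().faces.reshape(-1, 4)[:, 1:]
        src = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2]])
        dst = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0]])
        graph = dgl.graph(
            (torch.from_numpy(np.concatenate([src, dst])),
             torch.from_numpy(np.concatenate([dst, src]))),
            num_nodes=mesh.n_points,
        )
        graph = dgl.to_simple(graph)
        graph.ndata["pos"] = torch.tensor(np.asarray(mesh.points), dtype=torch.float32)
        u, v = graph.edges()
        disp = graph.ndata["pos"][v] - graph.ndata["pos"][u]
        graph.edata["x"] = torch.cat([disp, disp.norm(dim=1, keepdim=True)], dim=1)
        return graph

    # Placeholder file names and feature sizes -- inference.py defines the real ones.
    mesh = pv.read("ahmed_body.vtp")
    graph = build_graph(mesh)

    model = MeshGraphNet(3, 4, 4)  # node / edge / output feature sizes (illustrative)
    model.load_state_dict(torch.load("checkpoint.pt", map_location="cpu"))  # assumes a plain state dict
    model.eval()

    with torch.no_grad():
        # Assumed forward signature: (node features, edge features, graph).
        pred = model(graph.ndata["pos"], graph.edata["x"], graph)

    # Attach the predicted surface pressure to the mesh and save for visualization.
    mesh["p_pred"] = pred[:, 0].numpy()
    mesh.save("ahmed_body_pred.vtp")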