Mistral HF 7B 25.01 (DGXC Benchmarking)

Description
This recipe contains information and scripts to produce performance results for the Mistral Hugging Face 7B model fine-tuning workload using PEFT and FSDP.
Publisher
NVIDIA
Latest Version
25.01
Modified
February 13, 2025
Compressed Size
5.97 KB

Overview

This recipe contains information and scripts to produce performance results for the Mistral Hugging Face 7B fine-tuning workload using PEFT and FSDP. The scripts help perform environment setup, dataset setup, and benchmark job launch. This variant of the workload is best suited for GPU clusters with:

  • At least 8 GPUs with at least 80 GB of memory each. Fine-tuning this 7-billion-parameter variant of the workload will not fit on fewer GPUs or on GPUs with less memory.
  • H100 GPUs. This workload runs in BF16, which is supported by H100 GPUs.

Expected Performance

Performance for HF Mistral fine tuning is measured by train samples per second, which is logged in the .out file associated with the job.

grep train_samples_per_second log-hf_mistral_7b_32_656947.out
{'train_runtime': 2950.1412, 'train_samples_per_second': 555.363, 'train_steps_per_second': 0.034, 'train_loss': 1.0721950674057006, 'epoch': 6.25}
Mistral 7B BF16             8x H100     16x H100    32x H100    64x H100    128x H100   256x H100
Train samples per second    16.287      37.895      81.626      161.273     308.95      555.363
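
Not part of the recipe, but as a quick sanity check you can derive per-GPU throughput from the table above with a small shell sketch (the values below are copied from the table):

# Sketch only: per-GPU throughput at each scale, from the table above.
for entry in 8:16.287 16:37.895 32:81.626 64:161.273 128:308.95 256:555.363; do
  gpus=${entry%%:*}; tput=${entry##*:}
  awk -v g="$gpus" -v t="$tput" 'BEGIN { printf "%4d GPUs: %8.3f samples/s  (%.3f samples/s per GPU)\n", g, t, t/g }'
done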

Prerequisites

This recipe requires access to the Mistral 7B model on Hugging Face. Instructions for requesting access are in the Request Access section below.

Prepare Environment

Slurm

We reference a number of Slurm commands and parameters in this document. A brief summary is included below. Note that these are a guide and might not apply to all environments; consult your system administrator for the parameters specific to your system.

Common parameters:

  • SBATCH_PARTITION or -p - Partition (or queue) to use.
  • SBATCH_ACCOUNT or -A - Slurm account to associate with your job, different from your user. Meant for accounting purposes.
  • SBATCH_GPUS_PER_NODE or --gres=gpu:<num gpus> - If your cluster is configured with GRES this should be set to all GPUs in a node. Ignore if not configured.
    • Encountering errors such as 'GPUs not found' or 'Cannot submit to this partition without GPU resources' means this setting is required.

These parameters can be set either by exporting the environment variable or using the corresponding sbatch flag.
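
For illustration only (the partition and account names below are hypothetical), setting them via environment variables looks like this:

# Hypothetical values; substitute the partition, account, and GPU count for your cluster.
export SBATCH_PARTITION=batch
export SBATCH_ACCOUNT=myproject
export SBATCH_GPUS_PER_NODE=8   # only needed if your cluster is configured with GRES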

Workload Setup

Create a staging area by running the setup.sh script. The script converts the Docker image nvcr.io/nvidia/pytorch:24.02-py3 to the nvidia+pytorch+24.02.sqsh squashfs file under the $STAGE_PATH folder and downloads the DHS-LLM workshop source code.

# Set the path where all artifacts will be downloaded
export STAGE_PATH=<path to your shared file system folder>   # e.g. /lustre/myproject/nemo

# Run the setup
sbatch -A ${SBATCH_ACCOUNT} -p ${SBATCH_PARTITION} -N 1 ./setup.sh

Check the corresponding slurm-<job_id>.out file for status information.

Important: the STAGE_PATH set in this step must be the same STAGE_PATH used when running the workload.
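
As a quick check (not part of the recipe), you can confirm the staged artifacts exist; the squashfs file name comes from the description above, while the exact directory name of the DHS-LLM workshop sources may differ on your system:

# The setup job should have produced the squashfs image under $STAGE_PATH.
ls -lh $STAGE_PATH/nvidia+pytorch+24.02.sqsh
# The DHS-LLM workshop source code should also have been downloaded into $STAGE_PATH.
ls $STAGE_PATH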

Request Access

Access to the Mistral 7B model must be requested on its Hugging Face page (https://huggingface.co/mistralai/Mistral-7B-v0.1).

Prepare Dataset

To download the model and dataset you will need to create a Hugging Face access token with READ privileges. You will use your HF user name and access token as the user/password for the git clones. For more information see: https://huggingface.co/docs/hub/en/security-tokens
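
One optional way to avoid retyping the token at every prompt (not required by this recipe) is git's credential store; note that it saves the token in plain text in ~/.git-credentials:

# Optional: cache the Hugging Face user name and access token after the first
# successful prompt so later clones and pulls do not ask again.
git config --global credential.helper store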

Download the model:

Note: Cloning the model can take well over an hour, and you will be prompted twice for user/password. After the second prompt it may appear as if the clone has hung.

cd $STAGE_PATH

# Only needs to be performed once
git lfs install

git clone https://huggingface.co/mistralai/Mistral-7B-v0.1

If the model download step was successful, these files should be present in the $STAGE_PATH/Mistral-7B-v0.1 folder:

README.md  config.json  generation_config.json  model-00001-of-00002.safetensors  model-00002-of-00002.safetensors  model.safetensors.index.json  pytorch_model-00001-of-00002.bin  pytorch_model-00002-of-00002.bin  pytorch_model.bin.index.json  special_tokens_map.json  tokenizer.json  tokenizer.model  tokenizer_config.json
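
Optionally, you can also confirm that the LFS-tracked weight files were fully downloaded rather than left as small pointer files (a sanity check, not part of the recipe):

cd $STAGE_PATH/Mistral-7B-v0.1
git lfs ls-files   # lists the LFS-tracked weight files
du -sh .           # total size should be tens of GB, not a few KB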

Download the dataset:

git clone https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k

If the dataset clone step was successful, these files should be present in the $STAGE_PATH/ultrachat_200k/data folder:

test_gen-00000-of-00001-3d4cd8309148a71f.parquet  test_sft-00000-of-00001-f7dfac4afe5b93f4.parquet  train_gen-00000-of-00003-a6c9fb894be3e50b.parquet  train_gen-00001-of-00003-d6a0402e417f35ca.parquet  train_gen-00002-of-00003-c0db75b92a2f48fd.parquet  train_sft-00000-of-00003-a3ecf92756993583.parquet  train_sft-00001-of-00003-0a1804bcb6ae68c6.parquet  train_sft-00002-of-00003-ee46ed25cfae92c6.parquet

Run Training

Once the environment has been prepared, it is time to train a model. Run the launch.sh script with sbatch to launch Hugging Face Mistral 7B model fine-tuning on 1 to 32 nodes with BF16 precision. Log files will be located under ${STAGE_PATH}/results/$GSW_VERSION/bf16/7b/$JOB_TOTAL_GPUS.

sbatch -A ${SBATCH_ACCOUNT} -p ${SBATCH_PARTITION} -N ${NUM_NODES} ./launch.sh

Where:

  • NUM_NODES is calculated as N_GPUS / N_GPUS_PER_NODE; N_GPUS_PER_NODE is 8 for DGX H100, so for a 256-GPU run NUM_NODES = 256 / 8 = 32. A minimal submission sketch follows this list.
  • See the Slurm section above for more information on Slurm parameters.
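
A minimal submission sketch for a 256-GPU (32-node) run, assuming the Slurm variables from the Prepare Environment section are already exported:

# Sketch: derive NUM_NODES from the desired GPU count, then submit launch.sh.
N_GPUS=256
N_GPUS_PER_NODE=8                           # DGX H100
NUM_NODES=$(( N_GPUS / N_GPUS_PER_NODE ))   # 256 / 8 = 32

sbatch -A ${SBATCH_ACCOUNT} -p ${SBATCH_PARTITION} -N ${NUM_NODES} ./launch.sh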

Profiling

We do not expose profiling options for this workload at this time.

Notes

accelerate is launched on every node, and pip install -r requirements.txt is run as part of the srun command to ensure all compute nodes have the same environment. PYTHONPATH is set for this purpose.
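
For illustration only, the per-node pattern described above is roughly the following; the actual command lives in launch.sh, and the script name and arguments here are placeholders:

# Hypothetical sketch: one task per node installs the requirements and then
# starts accelerate, so every compute node ends up with the same environment.
srun --ntasks-per-node=1 bash -c \
  "pip install -r requirements.txt && \
   accelerate launch --num_machines \$SLURM_NNODES --machine_rank \$SLURM_NODEID <training script and args>"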