Fine-Tuning Benchmarking

The fine-tuning benchmarking blueprint streamlines infrastructure benchmarking for fine-tuning using the MLCommons methodology. It fine-tunes a quantized Llama-2-70B model on a standard dataset.

Once the run completes, benchmarking results such as training time and resource utilization are available in MLflow and Grafana for easy tracking. This blueprint enables data-driven infrastructure decisions for your fine-tuning jobs.
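For context on what the benchmark trains, the following is a minimal, illustrative sketch of the LoRA idea itself (not the blueprint's actual code, and the dimensions are toy values chosen here for demonstration): the pretrained weight matrix stays frozen while two small low-rank factors are trained, so the adapted weight is W + (alpha / r) * B @ A.

```python
import numpy as np

# Illustrative LoRA update (hypothetical toy sizes, not the 70B model's shapes).
d, k, r, alpha = 8, 8, 2, 16           # output dim, input dim, LoRA rank, scaling
W = np.random.randn(d, k)              # frozen pretrained weight
A = np.random.randn(r, k) * 0.01       # trainable low-rank factor
B = np.zeros((d, r))                   # zero-initialized, so the update starts at 0
W_effective = W + (alpha / r) * B @ A  # adapted weight applied at inference

# Trainable parameter count drops from d*k (full fine-tune) to r*(d+k) (LoRA).
full_params = d * k
lora_params = r * (d + k)
```

Because B starts at zero, the adapted weight initially equals the frozen weight, and only r*(d+k) parameters are trained instead of d*k, which is why LoRA makes fine-tuning a 70B model feasible on a single node.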

Pre-Filled Samples

| Title | Description |
| --- | --- |
| LoRA fine-tuning of quantized Llama-2-70B model on an A100 node using the MLCommons methodology | Deploys LoRA fine-tuning of a quantized Llama-2-70B model on BM.GPU.A100.8 with 8 GPUs, using the MLCommons methodology. |