# ML Training Reference Architectures & Tests

> **Warning**: We are currently undergoing a major refactoring of this repository, particularly focused on the test cases section. If you prefer to use the previous directory structure and deprecated test cases, please refer to v1.1.0.

This repository contains reference architectures and test cases for distributed model training with Amazon SageMaker HyperPod, AWS ParallelCluster, AWS Batch, and Amazon EKS. The test cases cover different types and sizes of models as well as different frameworks and parallel optimizations (PyTorch DDP/FSDP, Megatron-LM, NeMo Megatron, ...).

The major components of this directory are:

```
reference-architectures/
|-- 1.architectures/               # CloudFormation templates for reference arch
|-- 2.ami_and_containers/          # Scripts to create AMIs and container images
|-- 3.test_cases/                  # Reference test cases and/or benchmark scripts
|-- 4.validation_observability/    # Tools to measure performance or troubleshoot
`-- ...
```

NOTE: the architectures are designed to work with the S3 bucket and VPC created using the reference templates 1.architectures/0.s3/ and 1.architectures/1.vpc_network/. We strongly recommend deploying these two templates before deploying any of the other reference architectures.
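
As an illustration, the sketch below deploys both prerequisite stacks with the AWS CLI. The template file names and stack names are assumptions for illustration only; check each directory for the actual template to deploy.

```bash
# Minimal sketch: deploy the two prerequisite stacks with the AWS CLI.
# Template file names and stack names are assumptions; use the actual
# templates found under 1.architectures/0.s3/ and 1.architectures/1.vpc_network/.

# S3 bucket for training artifacts (template file name assumed)
aws cloudformation deploy \
  --stack-name ml-training-s3 \
  --template-file 1.architectures/0.s3/s3.yaml

# VPC and subnets (template file name assumed)
aws cloudformation deploy \
  --stack-name ml-training-vpc \
  --template-file 1.architectures/1.vpc_network/vpc.yaml \
  --capabilities CAPABILITY_IAM
```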

## 0. Workshops

You can follow the workshops below to train models on AWS. Each contains examples for several test cases as well as nuggets of information on operating a cluster for LLM training.

| Name | Comments |
| ---- | -------- |
| Amazon SageMaker HyperPod | Workshop for SageMaker HyperPod, shows how to deploy and monitor it |
| AWS ParallelCluster | Similar workshop as HyperPod but on ParallelCluster |
| Amazon SageMaker HyperPod EKS | Workshop for SageMaker HyperPod EKS, shows how to deploy and monitor it |

## 1. Architectures

Architectures are located in 1.architectures and consist of utilities and service-related architectures.

| Name | Category | Usage |
| ---- | -------- | ----- |
| 0.s3 | Storage | Create an S3 bucket |
| 1.vpc_network | Network | Create a VPC with subnets and required resources |
| 2.aws-parallelcluster | Compute | Cluster templates for GPU & custom silicon training |
| 3.aws-batch | Compute | AWS Batch template for distributed training |
| 4.amazon-eks | Compute | Manifest files to train with Amazon EKS |
| 5.sagemaker-hyperpod | Compute | SageMaker HyperPod template for distributed training |

More will come; feel free to add new ones (e.g., Ray). You will also find documentation for EFA and the recommended environment variables.

## 2. Custom Amazon Machine Images

Custom machine images can be built using Packer for AWS ParallelCluster, Amazon EKS and plain EC2. These images are based on Ansible roles and playbooks.
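
As a rough illustration of that workflow, the sketch below invokes Packer against the image directory; the variable name and region are assumptions rather than the repository's documented inputs, so check 2.ami_and_containers/ for the actual templates and build instructions.

```bash
# Minimal sketch of a Packer-based image build; the variable shown is an
# assumption, see 2.ami_and_containers/ for the actual templates and inputs.
cd 2.ami_and_containers

packer init .                                   # install required plugins
packer validate -var "aws_region=us-east-1" .   # syntax and variable check
packer build -var "aws_region=us-east-1" .
```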

## 3. Test cases

Test cases are organized by framework and cover various distributed training scenarios. Each test case includes the necessary scripts and configurations to run distributed training jobs.

### PyTorch Test Cases

- FSDP/ - Fully Sharded Data Parallel training examples
- megatron-lm/ - Megatron-LM distributed training examples
- nemo-launcher/ - NeMo Launcher examples for distributed training. This test case is for NeMo version 1.0 only.
- nemo-run/ - NeMo framework distributed training examples. This test case is for NeMo version 2.0+.
- neuronx-distributed/ - AWS Trainium distributed training examples
- mosaicml-composer/ - MosaicML Composer examples
- picotron/ - PicoTron distributed training examples
- torchtitan/ - TorchTitan examples
- cpu-ddp/ - CPU-based Distributed Data Parallel examples
- bionemo/ - BioNeMo distributed training examples

### JAX Test Cases

- jax/ - JAX-based distributed training examples using PaxML

Each test case includes:

- Training scripts and configurations
- Container definitions (where applicable)
- Launch scripts for different cluster types
- Performance monitoring and validation tools
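
As an illustration of how these pieces fit together, the sketch below submits one of the Slurm-based test cases; the path, launch script name, and node count are assumptions, so follow each test case's own README for the exact build and launch steps.

```bash
# Minimal sketch of launching a Slurm-based test case; the path, script name,
# and node count are assumptions, each test case ships its own instructions.
cd 3.test_cases/pytorch/FSDP        # assumed path

sbatch --nodes=4 train.sbatch       # assumed launch script name
squeue -u "$USER"                   # confirm the job is queued or running
```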

## 4. Validation scripts

Utility scripts and micro-benchmark examples are located under 4.validation_scripts/. The EFA Prometheus exporter can be found in this directory; a usage sketch follows the table below.

| Name | Comments |
| ---- | -------- |
| 1.pytorch-env-validation | Validates your PyTorch environment |
| 3.efa-node-exporter | Node exporter with Amazon EFA monitoring modules |
| 4.prometheus-grafana | Deployment assets to monitor SageMaker HyperPod clusters |
| 5.nsight | Shows how to run NVIDIA Nsight Systems to profile your workload |
| efa-versions.py | Get the versions of NVIDIA libraries, drivers and EFA drivers |
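
For example, a hedged sketch of building and running the EFA node exporter container might look like the following; the path and image name are assumptions, so see the directory's own README for the exact steps.

```bash
# Minimal sketch of building and running the EFA node exporter container;
# the path and image name are assumptions.
cd 4.validation_scripts/3.efa-node-exporter   # assumed path

docker build -t efa-node-exporter .
docker run --rm -d --net=host --name efa-node-exporter efa-node-exporter

# node_exporter serves metrics on port 9100 by default
curl -s http://localhost:9100/metrics | grep -i efa
```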

## 5. CI

Integration tests are written in pytest. Just run:

```bash
pytest .
```

Alternatively, you can run tests without capturing stdout and keeping all Docker images and other artifacts:

```bash
pytest -s --keep-artifacts=t
```

## 6. Contributors

Thanks to all the contributors for building, reviewing and testing.


## 7. Star History

[Star History Chart]