
Optimizing PyTorch Model Training by Wrapping Memory-Mapped Tensors on an Nvidia GPU with TensorDict



pytorch_training_optimization_using_tensordict_memory_mapping

Optimizing PyTorch training by wrapping the tensors of a torch.utils.data.Dataset as tensordict.MemoryMappedTensor entries inside a tensordict.TensorDict, which is memory-mapped, pinned, and loaded onto an Nvidia GPU, and then passing that TensorDict to torch.utils.data.DataLoader to boost model training speed.

To run the demo:

git clone https://github.com/OriYarden/pytorch_training_optimization_using_tensordict_memory_mapping
cd pytorch_training_optimization_using_tensordict_memory_mapping
python run_demo.py

Training 1 Epoch via torch.utils.data.Dataset:

[screenshot: demo_dataloader]

Training 1 Epoch via tensordict.TensorDict.MemoryMappedTensor(torch.utils.data.Dataset):

[screenshot: demo_td_dataloader]

TensorDict Memory Mapping boosts training speed.

The one-time cost of the initial wrapping is approximately equal to the runtime of 1 epoch with torch.utils.data.Dataset, so it amortizes after the first epoch of training:

[screenshot: demo_td_wrapper]