
# DiT: Diffusion Transformer for Face Generation

This project implements a Diffusion Transformer (DiT) model trained on CelebA latents.

## Generated Samples

[Image: generated_epoch_50]

The model was trained for 15 epochs using a DiT architecture with latent diffusion on VAE-compressed representations of CelebA images.
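To illustrate what "latent diffusion" means here, the sketch below shows the standard forward (noising) process applied to VAE latents: a clean latent `z_0` is mixed with Gaussian noise according to a schedule, and a DiT would be trained to predict that noise. This is a minimal NumPy illustration under assumed hyperparameters (1000 steps, linear beta schedule, 32x32x4 latent shape); it is not code from this repository.

```python
import numpy as np

# Assumed diffusion schedule: 1000 steps, linear betas (illustrative values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)  # cumulative signal-retention factor

def q_sample(z0, t, rng):
    """Sample z_t ~ q(z_t | z_0) = N(sqrt(abar_t) * z_0, (1 - abar_t) * I).

    The DiT's training target would be the noise `eps` drawn here.
    """
    eps = rng.standard_normal(z0.shape)
    zt = np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return zt, eps

rng = np.random.default_rng(0)
# A batch of 4 VAE latents with an assumed (channels, height, width) of (4, 32, 32).
z0 = rng.standard_normal((4, 4, 32, 32))
zt, eps = q_sample(z0, t=500, rng=rng)
print(zt.shape)  # same shape as the input latents: (4, 4, 32, 32)
```

At small `t` the latent is nearly unchanged; by `t = T - 1` it is close to pure noise, which is why sampling can start from a standard Gaussian in latent space and be decoded by the VAE afterward.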