Training scripts for Flux/Flux Kontext (non-DreamBooth) #12274
aungsiminhtet started this conversation in General
While reviewing the training examples in diffusers, I noticed that for Stable Diffusion we have several well-structured scripts covering different scenarios:
- `examples/text_to_image/train_text_to_image.py` — fine-tunes the UNet parameters
- `examples/text_to_image/train_text_to_image_lora.py` — trains LoRA layers in the UNet
- `examples/dreambooth/train_dreambooth.py` — fine-tunes the UNet and optionally the text encoder
- `examples/dreambooth/train_dreambooth_lora.py` — same as above, but with LoRA layers instead of full fine-tuning

However, for Flux and Flux Kontext, I don't see equivalent training scripts outside of DreamBooth. From what I understand, this means that if someone wants to fine-tune these models directly (e.g., for text-to-image tasks without DreamBooth personalization), they currently don't have a ready-made example.
My question is whether this omission is intentional, or whether such scripts simply haven't been contributed yet.

If the latter, I'd be interested in contributing scripts like `train_text_to_image_flux.py` and `train_text_to_image_lora_flux.py`, adapted from the Stable Diffusion versions. While I know the difference between these scripts and their DreamBooth counterparts is not huge, having non-DreamBooth training examples for Flux/Flux Kontext could be useful for practitioners who want to fine-tune directly on new datasets without the DreamBooth setup.

Thanks for your insights!