How to use LoRA weights after training with train_lcm_distill_lora_sd_wds.py #10391
yangzhenyu6 asked this question in Q&A · Unanswered · 0 replies
I trained the LoRA weights with the following script, specifying Dreamshaper7 as the teacher model. However, when I try to load the resulting LoRA weights with Dreamshaper7 as the pre-trained base model, I get an error. How should I call the LoRA weights after training? Here is the training script:
#!/bin/bash
# Define the variables
PRETRAINED_TEACHER_MODEL="/ai/yzy/latent-consistency-model-main/LCM_Dreamshaper_v7"
OUTPUT_DIR="/ai/yzy/latent-consistency-model-main/output"
RESOLUTION=512
LORA_RANK=64
LEARNING_RATE=1e-6
LOSS_TYPE='huber'
ADAM_WEIGHT_DECAY=0.0
MAX_TRAIN_SAMPLES=200000
DATALOADER_NUM_WORKERS=4
TRAIN_SHARDS_PATH_OR_URL='/ai/yzy/latent-consistency-model-main/dataset.tar'
VALIDATION_STEPS=50
CHECKPOINTING_STEPS=50
CHECKPOINTS_TOTAL_LIMIT=10
TRAIN_BATCH_SIZE=8
GRADIENT_ACCUMULATION_STEPS=1
SEED=453645634
# Run the training script
python ./LCM_Training_Script/consistency_distillation/train_lcm_distill_lora_sd_wds.py \
  --pretrained_teacher_model=$PRETRAINED_TEACHER_MODEL \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision=fp16 \
  --resolution=$RESOLUTION \
  --lora_rank=$LORA_RANK \
  --learning_rate=$LEARNING_RATE \
  --loss_type=$LOSS_TYPE \
  --adam_weight_decay=$ADAM_WEIGHT_DECAY \
  --max_train_samples=$MAX_TRAIN_SAMPLES \
  --dataloader_num_workers=$DATALOADER_NUM_WORKERS \
  --train_shards_path_or_url=$TRAIN_SHARDS_PATH_OR_URL \
  --validation_steps=$VALIDATION_STEPS \
  --checkpointing_steps=$CHECKPOINTING_STEPS \
  --checkpoints_total_limit=$CHECKPOINTS_TOTAL_LIMIT \
  --train_batch_size=$TRAIN_BATCH_SIZE \
  --gradient_checkpointing \
  --enable_xformers_memory_efficient_attention \
  --gradient_accumulation_steps=$GRADIENT_ACCUMULATION_STEPS \
  --use_8bit_adam \
  --resume_from_checkpoint=latest \
  --num_train_epochs=10 \
  --seed=$SEED
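For reference, since the question is about how to call the LoRA weights after training, here is a minimal inference sketch, not part of the original post and not the script's documented workflow. It assumes the training run saved pytorch_lora_weights.safetensors into OUTPUT_DIR, that a recent diffusers release with LCMScheduler and load_lora_weights is installed, and that the LoRA is loaded on top of the same Dreamshaper base used as the teacher. The prompt, step count, and output filename are placeholders.

import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Paths taken from the training script above (assumed to be valid locally)
base_model = "/ai/yzy/latent-consistency-model-main/LCM_Dreamshaper_v7"  # same model used as the teacher
lora_dir = "/ai/yzy/latent-consistency-model-main/output"                # OUTPUT_DIR from the training run

pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)

# LCM-style LoRA weights are normally used with the LCM scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# Load the distilled LoRA weights produced by the training run
# (assumes OUTPUT_DIR contains pytorch_lora_weights.safetensors)
pipe.load_lora_weights(lora_dir)

# LCM inference typically uses very few steps and a low guidance scale
image = pipe(
    "a photo of an astronaut riding a horse",  # placeholder prompt
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lcm_lora_sample.png")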