Commit ce338d4

[docs] LoRA metadata (#11848)
* draft
* hub image
* update
* fix
1 parent bc55b63 commit ce338d4

File tree

1 file changed (+17, −26 lines)


docs/source/en/using-diffusers/other-formats.md

Lines changed: 17 additions & 26 deletions
@@ -70,41 +70,32 @@ pipeline = StableDiffusionPipeline.from_single_file(
 </hfoption>
 </hfoptions>
 
-#### LoRA files
+#### LoRAs
 
-[LoRA](https://hf.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) is a lightweight adapter that is fast and easy to train, making them especially popular for generating images in a certain way or style. These adapters are commonly stored in a safetensors file, and are widely popular on model sharing platforms like [civitai](https://civitai.com/).
+[LoRAs](../tutorials/using_peft_for_inference) are lightweight checkpoints fine-tuned to generate images or video in a specific style. If you are using a checkpoint trained with a Diffusers training script, the LoRA configuration is automatically saved as metadata in a safetensors file. When the safetensors file is loaded, the metadata is parsed to correctly configure the LoRA and avoids missing or incorrect LoRA configurations.
 
-LoRAs are loaded into a base model with the [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] method.
+The easiest way to inspect the metadata, if available, is by clicking on the Safetensors logo next to the weights.
+
+<div class="flex justify-center">
+    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/safetensors_lora.png"/>
+</div>
+
+For LoRAs that aren't trained with Diffusers, you can still save metadata with the `transformer_lora_adapter_metadata` and `text_encoder_lora_adapter_metadata` arguments in [`~loaders.FluxLoraLoaderMixin.save_lora_weights`] as long as it is a safetensors file.
 
 ```py
-from diffusers import StableDiffusionXLPipeline
 import torch
+from diffusers import FluxPipeline
 
-# base model
-pipeline = StableDiffusionXLPipeline.from_pretrained(
-    "Lykon/dreamshaper-xl-1-0", torch_dtype=torch.float16, variant="fp16"
+pipeline = FluxPipeline.from_pretrained(
+    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
 ).to("cuda")
-
-# download LoRA weights
-!wget https://civitai.com/api/download/models/168776 -O blueprintify.safetensors
-
-# load LoRA weights
-pipeline.load_lora_weights(".", weight_name="blueprintify.safetensors")
-prompt = "bl3uprint, a highly detailed blueprint of the empire state building, explaining how to build all parts, many txt, blueprint grid backdrop"
-negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture"
-
-image = pipeline(
-    prompt=prompt,
-    negative_prompt=negative_prompt,
-    generator=torch.manual_seed(0),
-).images[0]
-image
+pipeline.load_lora_weights("linoyts/yarn_art_Flux_LoRA")
+pipeline.save_lora_weights(
+    transformer_lora_adapter_metadata={"r": 16, "lora_alpha": 16},
+    text_encoder_lora_adapter_metadata={"r": 8, "lora_alpha": 8}
+)
 ```
 
-<div class="flex justify-center">
-    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/blueprint-lora.png"/>
-</div>
-
 ### ckpt
 
 > [!WARNING]
