The Hugging Face repo needs to be fixed for the Sana 2K and 4K models #10634

Open
nitinmukesh opened this issue Jan 23, 2025 · 1 comment
Labels
bug Something isn't working

Comments

nitinmukesh commented Jan 23, 2025

Describe the bug

Hello @lawrence-cj ,

I am using Sana via diffusers. The issue applies to both of the repos below, and possibly to the 512/1024 models as well, though I have not tested those.

import torch
from diffusers import SanaPipeline

# Pick the repo for the requested resolution.
if inference_type == "Sana 4K":
    model_path = "Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers"
else:
    model_path = "Efficient-Large-Model/Sana_1600M_2Kpx_BF16_diffusers"

pipe_sana = SanaPipeline.from_pretrained(
    pretrained_model_name_or_path=model_path,
    variant="bf16",
    torch_dtype=torch.bfloat16,
    use_safetensors=True,
)

When I specify variant="bf16" and use_safetensors=True, from_pretrained should download only the bf16 weights, not the 32-bit ones. This works as expected for the text_encoder and vae, but not for the transformer.
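For reference, which files exist for each precision can be checked directly against the Hub; a minimal sketch using huggingface_hub (the grouping by the ".bf16" infix below is just for inspection):

from huggingface_hub import HfApi

# List every file in the 2K repo and split on the ".bf16" variant infix.
api = HfApi()
files = api.list_repo_files("Efficient-Large-Model/Sana_1600M_2Kpx_BF16_diffusers")
weights = [f for f in files if f.endswith(".safetensors")]
bf16 = [f for f in weights if ".bf16" in f]
fp32 = [f for f in weights if ".bf16" not in f]
print("bf16 variant weights:", *bf16, sep="\n  ")
print("non-variant (fp32) weights:", *fp32, sep="\n  ")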

C:\Users\nitin\.cache\huggingface\hub\models--Efficient-Large-Model--Sana_1600M_2Kpx_BF16_diffusers\snapshots\c096bbd4f6da0daf181f4fbce5e7505051b8c75c>tree /F
Folder PATH listing for volume Windows-SSD
Volume serial number is CE9F-A6AE
C:.
│   model_index.json
│
├───scheduler
│       scheduler_config.json
│
├───text_encoder
│       config.json
│       model.bf16-00001-of-00002.safetensors
│       model.bf16-00002-of-00002.safetensors
│       model.safetensors.index.bf16.json
│
├───tokenizer
│       special_tokens_map.json
│       tokenizer.json
│       tokenizer.model
│       tokenizer_config.json
│
├───transformer
│       config.json
│       diffusion_pytorch_model-00001-of-00002.safetensors
│       diffusion_pytorch_model-00002-of-00002.safetensors
│       diffusion_pytorch_model.bf16.safetensors
│       diffusion_pytorch_model.safetensors.index.json
│
└───vae
        config.json
        diffusion_pytorch_model.bf16.safetensors
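
The listing above shows the problem: the transformer folder ships the fp32 shards and their index alongside the single-file bf16 weights. Until the repo or the loader is fixed, one possible workaround is to pre-filter the download yourself with snapshot_download and load from the local copy; a sketch, assuming the allow patterns below cover everything the pipeline needs:

import torch
from huggingface_hub import snapshot_download
from diffusers import SanaPipeline

# Pull only configs, tokenizer files, and .bf16 weights; skip the fp32 shards.
local_dir = snapshot_download(
    "Efficient-Large-Model/Sana_1600M_2Kpx_BF16_diffusers",
    allow_patterns=["*.json", "tokenizer/*", "*.bf16*.safetensors"],
)

pipe_sana = SanaPipeline.from_pretrained(
    local_dir,
    variant="bf16",
    torch_dtype=torch.bfloat16,
    use_safetensors=True,
)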

Reproduction

import torch
from diffusers import SanaPipeline

model_path = "Efficient-Large-Model/Sana_1600M_2Kpx_BF16_diffusers"
pipe_sana = SanaPipeline.from_pretrained(
    pretrained_model_name_or_path=model_path,
    variant="bf16",
    torch_dtype=torch.bfloat16,
    use_safetensors=True,
)
pipe_sana.to("cuda")
pipe_sana.vae.to(torch.bfloat16)
pipe_sana.text_encoder.to(torch.bfloat16)

Logs

Message during inference


A mixture of bf16 and non-bf16 filenames will be loaded.
Loaded bf16 filenames:
[transformer/diffusion_pytorch_model.bf16.safetensors, text_encoder/model.bf16-00001-of-00002.safetensors, text_encoder/model.bf16-00002-of-00002.safetensors, vae/diffusion_pytorch_model.bf16.safetensors]
Loaded non-bf16 filenames:
[transformer/diffusion_pytorch_model-00001-of-00002.safetensors, transformer/diffusion_pytorch_model-00002-of-00002.safetensors]
If this behavior is not expected, please check your folder structure.
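
The warning is consistent with the variant being resolved per filename: for the transformer, only the single-file bf16 weights match the variant pattern, so the loader also picks up the fp32 shards through the non-variant index. The snippet below is a hypothetical illustration of that filename convention, not the actual diffusers implementation; matches_variant is an invented helper:

import re

# Hypothetical helper illustrating the ".{variant}" filename convention,
# NOT diffusers source code.
def matches_variant(filename: str, variant: str) -> bool:
    # Matches "name.bf16.safetensors" and "name.bf16-00001-of-00002.safetensors".
    pattern = rf"^.+\.{re.escape(variant)}(-\d{{5}}-of-\d{{5}})?\.safetensors$"
    return re.match(pattern, filename) is not None

print(matches_variant("diffusion_pytorch_model.bf16.safetensors", "bf16"))            # True
print(matches_variant("diffusion_pytorch_model-00001-of-00002.safetensors", "bf16"))  # False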

System Info

Not needed, as this is a Hugging Face repo setup issue.

Who can help?

@lawrence-cj

nitinmukesh added the bug label on Jan 23, 2025
DN6 (Collaborator) commented Jan 24, 2025

This is a bug in how we're fetching variants. Will take a look.
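
Until a fix lands, one workaround that may sidestep the ambiguous transformer resolution is to load the transformer explicitly with its bf16 variant and pass it into the pipeline. A sketch; whether this also prevents the fp32 shards from being downloaded depends on how the pipeline resolves the snapshot, so treat it as an experiment:

import torch
from diffusers import SanaPipeline, SanaTransformer2DModel

model_path = "Efficient-Large-Model/Sana_1600M_2Kpx_BF16_diffusers"

# Load the transformer on its own, pinned to the bf16 single-file weights.
transformer = SanaTransformer2DModel.from_pretrained(
    model_path,
    subfolder="transformer",
    variant="bf16",
    torch_dtype=torch.bfloat16,
)

# Pass the preloaded transformer so from_pretrained reuses it.
pipe_sana = SanaPipeline.from_pretrained(
    model_path,
    transformer=transformer,
    variant="bf16",
    torch_dtype=torch.bfloat16,
    use_safetensors=True,
)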
