FluxPipeline is not working with GGUF :( #10674
Comments
Tested the same pipeline without GGUF and with int8wo; both of these work.
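For reference, a minimal sketch of what the working int8wo variant might look like — the `TorchAoConfig` backend, base repo id, and dtype here are assumptions, not taken from this thread:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, TorchAoConfig

model_id = "black-forest-labs/FLUX.1-dev"  # assumed base repo

# Weight-only int8 quantization of the transformer via torchao
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=TorchAoConfig("int8wo"),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
```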
With GGUF it just hangs: `(venv) C:\aiOWN\diffuser_webui>python Flex1alpha-gguf.py`
@nitinmukesh

```python
transformer = FluxTransformer2DModel.from_single_file(
    transformer_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=dtype,
    config=model_id,
)
```
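For context, here is how that snippet is typically completed into a full pipeline, following the pattern in the diffusers GGUF docs; the checkpoint URL, base repo, and prompt below are illustrative assumptions, not the exact script from this issue:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

model_id = "black-forest-labs/FLUX.1-dev"  # assumed base repo
# Assumed GGUF file; substitute the checkpoint actually being tested
transformer_path = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"

transformer = FluxTransformer2DModel.from_single_file(
    transformer_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # the step under discussion in this issue

image = pipe("a photo of a cat", num_inference_steps=20).images[0]
image.save("flux-gguf.png")
```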
Sorry for the delay in trying the suggested solution. Thank you for sharing the updated code and explaining the cause of the issue. I updated the code as suggested and am getting the following error: "No config file here"
Describe the bug
CPU offload is not working for the Flux GGUF pipeline; it works fine for the AuraFlow GGUF pipeline.
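For comparison, a sketch of the AuraFlow GGUF path that is reported to offload correctly; the checkpoint path and repo names are assumptions for illustration:

```python
import torch
from diffusers import AuraFlowPipeline, AuraFlowTransformer2DModel, GGUFQuantizationConfig

# Assumed GGUF checkpoint and base repo, for illustration only
ckpt_path = "https://huggingface.co/city96/AuraFlow-v0.3-gguf/blob/main/aura_flow_0.3-Q2_K.gguf"

transformer = AuraFlowTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow-v0.3", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # reported to work for AuraFlow GGUF
```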
Reproduction
Logs
System Info
Who can help?
No response