
Getting OOM on 4090 #4

Open

kdcyberdude opened this issue Jan 25, 2025 · 4 comments

@kdcyberdude

At the FluxFillModelLoader node, while loading the FluxFill model, I am getting an OOM.
Running the same script directly in Python also results in an OOM.
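
If it helps, this is roughly the load path that fails (a sketch of what I believe the node does; the checkpoint id and `model_name` below are placeholders for whatever the node is actually configured with):

```python
import torch
from diffusers import FluxTransformer2DModel

# Placeholders: substitute the FluxFill checkpoint the node actually loads.
model_name = "black-forest-labs/FLUX.1-Fill-dev"
device = "cuda"

# Loads the full transformer in bf16, then moves it onto a single GPU;
# the .to(device) step is where the OOM is raised on my machine.
model = FluxTransformer2DModel.from_pretrained(
    model_name, subfolder="transformer", torch_dtype=torch.bfloat16
).to(device)
```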

@asutermo
Owner

What are your computer specs? The base implementation of catvton-flux used an 80 GB A100. I was able to get it to work (slowly) on a 4080.

@kdcyberdude
Author

kdcyberdude commented Jan 25, 2025

@asutermo I have a server-grade system with multiple 4090s. The TryOff.json workflow is just giving me an OOM.

01:58:07.953 [Warning] [ComfyUI-0/STDERR] Traceback (most recent call last):
01:58:07.953 [Warning] [ComfyUI-0/STDERR]   File "/wdc/luxe/SwarmUI/dlbackend/ComfyUI/execution.py", line 327, in execute
01:58:07.953 [Warning] [ComfyUI-0/STDERR]     output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
01:58:07.953 [Warning] [ComfyUI-0/STDERR]                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
01:58:07.953 [Warning] [ComfyUI-0/STDERR]   File "/wdc/luxe/SwarmUI/dlbackend/ComfyUI/execution.py", line 202, in get_output_data
01:58:07.953 [Warning] [ComfyUI-0/STDERR]     return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
01:58:07.953 [Warning] [ComfyUI-0/STDERR]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
01:58:07.953 [Warning] [ComfyUI-0/STDERR]   File "/wdc/luxe/SwarmUI/dlbackend/ComfyUI/execution.py", line 174, in _map_node_over_list
01:58:07.953 [Warning] [ComfyUI-0/STDERR]     process_inputs(input_dict, i)
01:58:07.953 [Warning] [ComfyUI-0/STDERR]   File "/wdc/luxe/SwarmUI/dlbackend/ComfyUI/execution.py", line 163, in process_inputs
01:58:07.953 [Warning] [ComfyUI-0/STDERR]     results.append(getattr(obj, func)(**inputs))
01:58:07.953 [Warning] [ComfyUI-0/STDERR]                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
01:58:07.953 [Warning] [ComfyUI-0/STDERR]   File "/wdc/luxe/SwarmUI/dlbackend/ComfyUI/custom_nodes/ComfyUI-Flux-TryOff/try_off_nodes.py", line 32, in load_model
01:58:07.953 [Warning] [ComfyUI-0/STDERR]     model = FluxTransformer2DModel.from_pretrained(model_name, torch_dtype=torch.bfloat16).to(device)
01:58:07.954 [Warning] [ComfyUI-0/STDERR]             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
01:58:07.954 [Warning] [ComfyUI-0/STDERR]   File "/home/kd/anaconda3/envs/flux/lib/python3.11/site-packages/diffusers/models/modeling_utils.py", line 1191, in to
01:58:07.954 [Warning] [ComfyUI-0/STDERR]     return super().to(*args, **kwargs)
01:58:07.954 [Warning] [ComfyUI-0/STDERR]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
01:58:07.954 [Warning] [ComfyUI-0/STDERR]   File "/home/kd/anaconda3/envs/flux/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1340, in to
01:58:07.954 [Warning] [ComfyUI-0/STDERR]     return self._apply(convert)
01:58:07.954 [Warning] [ComfyUI-0/STDERR]            ^^^^^^^^^^^^^^^^^^^^
01:58:07.954 [Warning] [ComfyUI-0/STDERR]   File "/home/kd/anaconda3/envs/flux/lib/python3.11/site-packages/torch/nn/modules/module.py", line 900, in _apply
01:58:07.954 [Warning] [ComfyUI-0/STDERR]     module._apply(fn)
01:58:07.954 [Warning] [ComfyUI-0/STDERR]   File "/home/kd/anaconda3/envs/flux/lib/python3.11/site-packages/torch/nn/modules/module.py", line 900, in _apply
01:58:07.954 [Warning] [ComfyUI-0/STDERR]     module._apply(fn)
01:58:07.954 [Warning] [ComfyUI-0/STDERR]   File "/home/kd/anaconda3/envs/flux/lib/python3.11/site-packages/torch/nn/modules/module.py", line 900, in _apply
01:58:07.954 [Warning] [ComfyUI-0/STDERR]     module._apply(fn)
01:58:07.954 [Warning] [ComfyUI-0/STDERR]   File "/home/kd/anaconda3/envs/flux/lib/python3.11/site-packages/torch/nn/modules/module.py", line 927, in _apply
01:58:07.954 [Warning] [ComfyUI-0/STDERR]     param_applied = fn(param)
01:58:07.954 [Warning] [ComfyUI-0/STDERR]                     ^^^^^^^^^
01:58:07.954 [Warning] [ComfyUI-0/STDERR]   File "/home/kd/anaconda3/envs/flux/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1326, in convert
01:58:07.954 [Warning] [ComfyUI-0/STDERR]     return t.to(
01:58:07.954 [Warning] [ComfyUI-0/STDERR]            ^^^^^
01:58:07.955 [Warning] [ComfyUI-0/STDERR] torch.OutOfMemoryError: Allocation on device 
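
For what it's worth, a rough back-of-the-envelope check of why that `.to(device)` fails on a 24 GB card (assuming the FLUX transformer is ~12B parameters):

```python
# Rough estimate only: FLUX.1's transformer has ~12 billion parameters,
# so the bf16 weights alone nearly fill a 24 GB 4090 before the text
# encoders, VAE, or activations are even considered.
params = 12e9
bytes_per_param = 2  # bfloat16
weights_gib = params * bytes_per_param / 1024**3
print(f"~{weights_gib:.1f} GiB for the transformer weights alone")  # ~22.4 GiB
```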

@asutermo
Owner

Great, thanks for sharing the trace. I'll try to have this fixed soon.

@asutermo
Owner

I'm looking at options for multi-GPU. I just added quantization support a few minutes ago, but it slows things down. I was able to run both 8-bit and the regular model on my 4080 (with almost nothing else running).
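
The 8-bit path looks roughly like this (a minimal sketch assuming a diffusers version with the bitsandbytes integration; the checkpoint name and `subfolder` are placeholders for what the node actually loads):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# 8-bit weight loading via diffusers' bitsandbytes integration
# (requires: pip install bitsandbytes).
quant_config = BitsAndBytesConfig(load_in_8bit=True)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev",  # placeholder checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```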

FluxTransformer2DModel's device_map only allows 'balanced', and I was hitting issues with that (all the docs seem to reference 'auto', but FluxTransformer2DModel doesn't permit it). I'm still investigating options, though.
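
For concreteness, this is the shape of the call I've been experimenting with (illustrative only; as noted above, only 'balanced' is accepted here and it was still erroring for me):

```python
import torch
from diffusers import FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev",  # placeholder checkpoint
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    device_map="balanced",  # "auto" is rejected for this model class
)
```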
