Original Source: catvton-flux. I implemented their try-off inference code as ComfyUI nodes. There's a sample workflow under Workflow that uses SegFormer to generate the mask for you; I highly recommend this approach. Alternatively, you can provide your own mask.
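For reference, the kind of garment mask the SegFormer workflow produces can also be generated outside ComfyUI. Below is a minimal sketch using the transformers library; the `mattmdjaga/segformer_b2_clothes` checkpoint and the label ID are assumptions for illustration and may differ from what the bundled workflow uses.

```python
import numpy as np
import torch
from PIL import Image
from transformers import SegformerImageProcessor, AutoModelForSemanticSegmentation

# Assumed checkpoint: a SegFormer fine-tuned for clothes segmentation.
MODEL_ID = "mattmdjaga/segformer_b2_clothes"

processor = SegformerImageProcessor.from_pretrained(MODEL_ID)
model = AutoModelForSemanticSegmentation.from_pretrained(MODEL_ID)

image = Image.open("person.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
labels = upsampled.argmax(dim=1)[0]

# Assumed label ID for "upper clothes" in this checkpoint; adjust for your garment.
GARMENT_LABEL = 4
mask = (labels == GARMENT_LABEL).numpy().astype(np.uint8) * 255
Image.fromarray(mask).save("garment_mask.png")
```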
Please note that this was tested on a 4080, and it's quite slow. As of right now, you'll want a 4090 or better for performant execution.
This uses diffusers>=0.32.0 (soon to be 0.32.2).
- This is presently incompatible with the Flux fp8 single-file checkpoint.
- Please follow the instructions below and use the Hugging Face FLUX.1-dev process instead.
- I'm working on alternatives to this.
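To confirm the installed diffusers meets the requirement above, here is a quick generic check (nothing in it is specific to this repo):

```python
# Sanity-check that the installed diffusers satisfies the >=0.32.0 requirement.
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("diffusers"))
assert installed >= Version("0.32.0"), f"diffusers {installed} is too old; install >=0.32.0"
print(f"diffusers {installed} OK")
```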
After heavy experimentation with try-on, it's nice to have a try-off model, xiaozaa/cat-tryoff-flux, to work with.
The cat-tryoff-flux model will download automatically. The FLUX.1-dev model requires some effort.
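For context, the nodes pair the try-off transformer weights with the base FLUX.1-dev fill pipeline, which is why both downloads are needed. A rough diffusers-level sketch of that pairing follows; it assumes the xiaozaa/cat-tryoff-flux repo provides FluxTransformer2DModel weights and is illustrative only, not the exact code the nodes run.

```python
import torch
from diffusers import FluxFillPipeline, FluxTransformer2DModel

# Assumed: the try-off repo ships transformer weights compatible with the FLUX.1-dev fill pipeline.
transformer = FluxTransformer2DModel.from_pretrained(
    "xiaozaa/cat-tryoff-flux", torch_dtype=torch.bfloat16
)
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # or the locally cloned checkpoint path
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # eases VRAM pressure on 16 GB cards such as the 4080
```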
- Go to https://huggingface.co and sign in (or create an account)
- Go to your settings and generate a 'write' token
- Go to https://huggingface.co/black-forest-labs/FLUX.1-dev and accept the terms
- Open a command prompt, change to your ComfyUI installation directory, and do the following
Windows
SET HF_TOKEN=<token_from_above>
SET HUGGING_FACE_HUB_TOKEN=<token_from_above>
Linux
export HF_TOKEN=<token_from_above>
export HUGGING_FACE_HUB_TOKEN=<token_from_above>
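If environment variables are inconvenient, the token can also be registered once through huggingface_hub (already installed as a diffusers dependency); the token string below is a placeholder:

```python
# One-time alternative to the environment variables above.
from huggingface_hub import login

login(token="<token_from_above>")  # validates and stores the token for future hub downloads
```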
Finally, download FLUX.1-dev
cd ./models/checkpoints
git lfs install
git clone https://huggingface.co/black-forest-labs/FLUX.1-dev
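If git-lfs isn't available, or you'd rather avoid the duplicate storage a .git/lfs clone leaves behind, the same checkpoint can be fetched with huggingface_hub instead (run from the ComfyUI root; the target directory mirrors the clone path above):

```python
# Alternative to git clone: download FLUX.1-dev straight into the checkpoints folder.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="black-forest-labs/FLUX.1-dev",
    local_dir="./models/checkpoints/FLUX.1-dev",
)
```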
And run
cd ../..
python ./main.py
Quantization: 8-bit is supported, but note that it was definitely slower to generate. 4-bit and mixed precision are untested.
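For reference, 8-bit loading of the transformer can be done through diffusers' bitsandbytes integration. This is a sketch of that general approach (it requires the bitsandbytes package and is not necessarily how the node wires it up):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# Load the try-off transformer in 8-bit via bitsandbytes.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
transformer = FluxTransformer2DModel.from_pretrained(
    "xiaozaa/cat-tryoff-flux",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```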
- Multi-gpu testing
- Optimize, optimize, optimize.
- Allow additional models
- Formatting/consistency
- TryOn