This project implements a custom image-to-image style transfer pipeline that blends the style of one image (Image A) into the structure of another image (Image B).
We added Canny edge conditioning to this work by Nathan Shipley, in which the fusion of style and structure creates artistic visual outputs. It's an easy edit.
We will release the code of the version leveraging the ZenCtrl architecture.
- Style-Structure Fusion: Seamlessly transfers style from Image A into the spatial geometry of Image B.
- Model-Driven Pipeline: No UI dependencies; powered entirely through locally executed Python scripts.
- Modular: Easily plug in other models or replace components (ControlNet, encoders, etc.).
Inputs:
- Image A: Style reference
- Image B: Structural reference
Structural Conditioning:
- Canny Edge Map of Image B
- Depth Map via a pre-trained DepthAnything model
Style Conditioning:
- Style prompts or embeddings extracted from Image A via a CLIP/T5/BLIP2 encoder
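As one illustrative way to obtain such an embedding, a CLIP image encoder from the `transformers` library can encode Image A into a style vector. The model name and helper functions below are assumptions for the sketch, not code from this repo:

```python
import torch
from PIL import Image

def l2_normalize(features):
    """Normalize embeddings to unit length along the feature axis."""
    return features / features.norm(dim=-1, keepdim=True)

def extract_style_embedding(image_path, model_name="openai/clip-vit-base-patch32"):
    """Encode Image A into a CLIP image embedding usable as a style signal."""
    # Imported lazily so the heavy dependency is only needed at call time.
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained(model_name)
    processor = CLIPProcessor.from_pretrained(model_name)
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)
    return l2_normalize(features)
```

Normalizing the embedding makes its influence on conditioning independent of the raw feature magnitude.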
Generation Phase:
- A diffusion model (e.g., Flux + Canny) is used
- Flux-style injection merges the style and structure via guided conditioning
- Output image retains Image B’s layout but adopts Image A’s artistic features
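This repo's generation step uses Flux + Canny; since the exact pipeline code isn't shown here, the sketch below uses the `diffusers` Stable Diffusion ControlNet API as an illustrative stand-in (the checkpoint names are well-known public models, not necessarily the ones used by this project):

```python
import numpy as np
from PIL import Image

def edges_to_condition(edges):
    """Convert a single-channel Canny edge map (H x W uint8 array) into the
    3-channel PIL image that ControlNet-style pipelines expect."""
    return Image.fromarray(np.stack([edges] * 3, axis=-1))

def generate(prompt, condition_image, steps=30):
    """Illustrative canny-conditioned generation: the style lives in the
    prompt, the structure in the edge-map condition image."""
    # Imported lazily; requires `diffusers` and a GPU for practical use.
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet
    )
    return pipe(prompt, image=condition_image, num_inference_steps=steps).images[0]
```

The output keeps Image B's layout because the edge map constrains spatial structure, while the prompt (or injected style embedding) supplies Image A's artistic features.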
- Install dependencies:

```bash
pip install -r requirements.txt
```

- Run generation:

```bash
gradio app.py
```
- AI-powered visual storytelling
- Concept art and virtual scene design
- Artistic remapping of real-world photos
- Ad creative generation
- Nathan Shipley's work for the idea spark
- Hugging Face models:
If you enjoyed this project, you may also like ZenCtrl, our open-source agentic visual control toolkit for generative image pipelines that we are developing.
ZenCtrl can be combined with this style transfer project to introduce additional layers of control, allowing for more refined composition before or after stylization. It’s especially useful when working with structured scenes, human subjects, or product imagery.
With ZenCtrl, we aim to:
- Chain together preprocessing, control, editing, and postprocessing modules
- Create workflows for tasks like product photography, try-on, background swaps, and face editing
- Use control adapters like canny, depth, pose, segmentation, and more
- Easily integrate with APIs or run it in a Hugging Face Space
Whether you're refining structure by changing the background layout before stylization or editing the results afterward, ZenCtrl gives you full compositional control across the image generation stack.
👉 Explore ZenCtrl on GitHub

👉 Try the ZenCtrl Demo on Hugging Face Spaces
Want to collaborate or learn more? Reach out via GitHub or drop us a message!