Klayand/Golden-Noise-for-Diffusion-Models

Inference code of "Golden Noise for Diffusion Models: A Learning Framework".
NPNet Pipeline Usage Guide😄

Overview

This guide provides instructions on how to use NPNet, a noise prompt network that transforms random Gaussian noise into golden noise by adding a small, desirable perturbation derived from the text prompt, boosting the overall quality and semantic faithfulness of the synthesized images.

Here we provide the inference code, which supports different models such as Stable Diffusion XL, DreamShaper-xl-v2-turbo, and Hunyuan-DiT.
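Conceptually, the golden-noise transformation can be sketched as below. This is an illustrative stand-in, not the paper's trained NPNet: the embedding size, the single linear map, and the perturbation scale are all assumptions made for the sketch.

```python
import numpy as np

def npnet_sketch(noise, prompt_emb, W, scale=0.1):
    # Illustrative stand-in for the trained noise prompt network:
    # derive a small perturbation from the text-prompt embedding
    # and add it to the initial Gaussian noise ("golden noise").
    perturbation = np.tanh(prompt_emb @ W).reshape(noise.shape)
    return noise + scale * perturbation

rng = np.random.default_rng(0)
noise = rng.standard_normal((4, 64, 64))       # initial latent noise
prompt_emb = rng.standard_normal(768)          # text-prompt embedding (e.g., from a text encoder)
W = rng.standard_normal((768, 4 * 64 * 64)) * 0.01  # hypothetical projection weights
golden = npnet_sketch(noise, prompt_emb, W)
print(golden.shape)  # (4, 64, 64)
```

The key property the sketch preserves is that the golden noise stays close to the original Gaussian sample: the perturbation is bounded and small, so the diffusion process still starts from an approximately Gaussian latent.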

Requirements

  • Python 3.8
  • PyTorch (CUDA build)
  • diffusers
  • Pillow (PIL)
  • numpy
  • timm
  • einops

Installation🚀️

Make sure you have set up a Python environment and installed a CUDA-enabled build of PyTorch. Before running the script, install the remaining required packages:

pip install diffusers Pillow numpy timm einops

Usage👀️

To use the NPNet pipeline, you need to run the npnet_pipeline.py script with appropriate command-line arguments. Below are the available options:

Command-Line Arguments

  • --pipeline: Select the model pipeline (SDXL, DreamShaper, DiT). Default is SDXL.
  • --prompt: The textual prompt based on which the image will be generated. Default is "A banana on the left of an apple."
  • --inference-step: Number of inference steps for the diffusion process. Default is 50.
  • --cfg: Classifier-free guidance scale. Default is 5.5.
  • --pretrained-path: Path to the pretrained model weights. Default is a specified path in the script.
  • --size: The size (height and width) of the generated image. Default is 1024.

Running the Script

Run the script from the command line by navigating to the directory containing npnet_pipeline.py and executing:

python npnet_pipeline.py --pipeline SDXL --prompt "A banana on the left of an apple." --size 1024

This command will generate an image based on the prompt "A banana on the left of an apple." using the Stable Diffusion XL model with an image size of 1024x1024 pixels.

Output🎉️

The script will save two images:

  • A standard image generated by the diffusion model.
  • A golden image generated by the diffusion model with the NPNet.

Both images will be saved in the current directory with names based on the model and prompt.
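One plausible naming scheme, combining the model name and a slug of the prompt, is sketched below; this is a hypothetical illustration, and the filenames the actual script produces may differ.

```python
def output_names(pipeline, prompt):
    # Hypothetical naming scheme: "<model>_<prompt-slug>_{standard,golden}.png".
    slug = "_".join(prompt.lower().strip(".").split())[:50]
    return f"{pipeline}_{slug}_standard.png", f"{pipeline}_{slug}_golden.png"

standard_name, golden_name = output_names("SDXL", "A banana on the left of an apple.")
print(standard_name)
print(golden_name)
```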

Pre-trained Weights Download❤️

We provide the pre-trained NPNet weights for Stable Diffusion XL, DreamShaper-xl-v2-turbo, and Hunyuan-DiT via an anonymous link.

Citation

If you find our code useful for your research, please cite our paper:

@misc{zhou2024goldennoisediffusionmodels,
      title={Golden Noise for Diffusion Models: A Learning Framework}, 
      author={Zikai Zhou and Shitong Shao and Lichen Bai and Zhiqiang Xu and Bo Han and Zeke Xie},
      year={2024},
      eprint={2411.09502},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2411.09502}, 
}
