Wrap inference in an InfinityPipeline and support batch inference with multiple prompts #109


Open · wants to merge 1 commit into main
Conversation

@nqbinhcs commented Apr 26, 2025

Description

This pull request adds the run_infinity_pipeline.py script for batch image generation with the Infinity model (issue #106). Key features include:

  • Pipeline wrapper: loads the Infinity model, VAE, and text encoder behind a single InfinityPipeline interface.
  • Batch inference: accepts a list of prompts and exposes configurable sampling parameters (cfg_scale, tau, seed, top_k, top_p); a sketch of one possible batching loop follows this list.
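
A minimal sketch of how such a batch loop might look inside the pipeline's __call__. This is illustrative only: the class name BatchPipelineSketch, the _generate_one helper, and the per-prompt seed offset are assumptions for exposition, not the PR's actual implementation.

from typing import List, Optional, Union

class BatchPipelineSketch:
    """Sketch only: fans a list of prompts out to single-prompt calls.
    _generate_one stands in for the real Infinity model call (assumed)."""

    def _generate_one(self, text: str, *, cfg_scale: float, tau: float,
                      seed: Optional[int], top_k: int, top_p: float):
        raise NotImplementedError  # placeholder for the actual model call

    def __call__(self, prompt: Union[str, List[str]], cfg_scale: float = 3.0,
                 tau: float = 0.5, seed: Optional[int] = None,
                 top_k: int = 900, top_p: float = 0.97) -> list:
        # Accept either a single prompt or a list of prompts.
        prompts = [prompt] if isinstance(prompt, str) else list(prompt)
        images = []
        for i, text in enumerate(prompts):
            # Offset the seed per prompt so a batched run stays reproducible
            # without every image sharing the same sampling noise.
            per_prompt_seed = None if seed is None else seed + i
            images.append(self._generate_one(
                text, cfg_scale=cfg_scale, tau=tau, seed=per_prompt_seed,
                top_k=top_k, top_p=top_p))
        return images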

Example Usage

Run the script:

python tools/run_infinity_pipeline.py

Core Functionality

import torch

# Load the Infinity model, VAE, and text encoder into a single pipeline.
# model_path, vae_path, and text_encoder_path point at the downloaded weights.
pipe = InfinityPipeline.from_pretrained(
    pretrained_model_name_or_path=model_path,
    vae_path=vae_path,
    text_encoder_path=text_encoder_path,
    model_type="infinity_2b",
    device="cuda",
    torch_dtype=torch.bfloat16,
    pn="1M",
)

prompts = [
    "A majestic dragon made of crystal",
    "A close-up photograph of a Corgi dog",
    "A photo of a kangaroo wearing an orange hoodie and blue sunglasses standing on the grass in front of the Sydney Opera House holding a sign on the chest with the text 'Welcome Friends!'"
]

# Generate one image per prompt, sharing the same sampling parameters.
images = pipe(
    prompt=prompts,
    cfg_scale=3.0,
    tau=0.5,
    seed=42,
    top_k=900,
    top_p=0.97,
)
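
If the pipeline returns PIL images, one per prompt (an assumption; the snippet above does not show the return type), the batch can be saved like this:

# Assumes `images` is a list of PIL.Image.Image objects, one per prompt.
for i, image in enumerate(images):
    image.save(f"infinity_output_{i}.png")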
