
LoRA Fine-Tuning Script for Single-GPU (24 GB) Setups #115

@joeyu930


Hi,

I’d like to fine-tune StableAnimator on a single RTX 3090 Ti (24 GB VRAM). The README notes that full fine-tuning currently requires around 40 GB of VRAM, so I’m writing to ask:

  1. Is there already a LoRA (low-rank adaptation) fine-tuning script or branch available?

  2. If not yet public, could you share any work-in-progress implementation or an estimated release plan?
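For context on what I mean by a LoRA setup: the idea is to freeze the pretrained weights and train only small low-rank update matrices, which cuts optimizer and gradient memory dramatically. Below is a minimal, self-contained PyTorch sketch of the technique (the class name `LoRALinear` and the hyperparameters are my own illustrative choices, not anything from the StableAnimator codebase):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update.

    Forward pass computes: base(x) + (alpha / rank) * B(A(x)),
    where A and B are the only trainable parameters.
    (Illustrative sketch only; names and defaults are assumptions.)
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the LoRA factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        self.lora_A = nn.Linear(base.in_features, rank, bias=False)
        self.lora_B = nn.Linear(rank, base.out_features, bias=False)
        # Zero-init B so the wrapped layer starts out identical to the base layer.
        nn.init.zeros_(self.lora_B.weight)
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.lora_B(self.lora_A(x)) * self.scaling
```

In practice one would wrap the attention projection layers of the denoising UNet this way and pass only the LoRA parameters to the optimizer, which is what I'm hoping a 24 GB card could handle.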

Thank you for the excellent work—looking forward to your reply!
