
[5/n][trainer] feat: flowgrpo trainer#5951

Open
zhtmike wants to merge 15 commits into verl-project:main from zhtmike:trainer-pr

Conversation

@zhtmike
Contributor

@zhtmike zhtmike commented Apr 10, 2026

What does this PR do?

The last piece of the puzzle to make the FlowGRPO trainer runnable :) Following #5297, co-authored with @AndyZhou952.

Added:

  • Adds FlowGRPO with a diffusion-oriented RL trainer and entrypoint.
  • Groups diffusion algorithms and metric helpers under the diffusion trainer.
  • Updates diffusion trainer, algorithm, and rollout configuration.
  • Adds example FlowGRPO scripts and OCR-style image data preparation.
  • Adds diffusion CPU tests and an E2E FlowGRPO diffusers run with dummy data.
  • Applies small fixes / enhancements across agent loops, workers, dataset loading, vLLM-omni rollout, etc.
  • Adjusts the vLLM omni workflow, PR template, and sanity checks.

Documentation will be provided in the next PR.
Scripts for full-model RL and for RL with a non-colocated reward model will also be provided in the next PR.


Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, fully_async, one_step_off
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

A formal installation guide will come in the next PR.
For now, you need to:

  1. Install vllm==0.18.
  2. Install vllm-omni==0.18.
  3. Install this branch.
  4. Prepare the dataset following examples/flowgrpo_trainer/data_process/qwenimage_ocr.py.
  5. Run examples/flowgrpo_trainer/run_qwen_image_ocr_lora.sh to try it out.

These are the wandb results from running examples/flowgrpo_trainer/run_qwen_image_ocr_lora.sh with trainer.val_before_train=True:

Val Score:
[screenshot: 2026-04-14 2:11:35 PM]

Critic:
[screenshot: 2026-04-14 2:11:08 PM]

Actor:
[screenshot: 2026-04-14 2:11:53 PM]

Performance:
[screenshot: 2026-04-14 2:12:09 PM]

Visualization:
[screenshot: 2026-04-14 2:12:33 PM]

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces the FlowGRPO trainer for diffusion models, enabling reinforcement learning for image generation tasks such as OCR. Key additions include the RayFlowGRPOTrainer, diffusion-specific advantage estimation logic, and updated dataset handling for multimodal inputs and negative prompts. The PR also refines the vLLM-Omni integration and provides comprehensive end-to-end testing scripts. A logic error was identified in the advantage calculation for single-sample groups, where the advantage should be zeroed out to prevent incorrect gradient updates.
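The single-sample-group issue the review flags can be sketched as follows. This is a minimal pure-Python illustration with hypothetical names, not the PR's actual code: group-normalized advantages where a group containing only one sample is zeroed out, since a lone reward has no in-group baseline.

```python
from statistics import mean, stdev

def grpo_advantages(rewards, group_ids, eps=1e-6):
    """Group-normalized advantages; a single-sample group gets zero advantage."""
    groups = {}
    for i, gid in enumerate(group_ids):
        groups.setdefault(gid, []).append(i)
    adv = [0.0] * len(rewards)
    for idx in groups.values():
        if len(idx) < 2:
            continue  # zero out: a lone sample has no baseline, so no gradient signal
        vals = [rewards[i] for i in idx]
        m, s = mean(vals), stdev(vals)
        for i in idx:
            adv[i] = (rewards[i] - m) / (s + eps)
    return adv
```
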

@zhtmike
Contributor Author

zhtmike commented Apr 10, 2026

By the way, the current diffusion YAML is too clumsy and contains too many unnecessary LLM configs; I will refactor it after this PR.

# extra configs for algorithm specific features.
# Model-specific diffusion sampling params (e.g. true_cfg_scale, guidance_scale,
# max_sequence_length, noise_level)
extra_configs: {}
Collaborator

Should we set noise_level: 0.0 here to make modification easier?

Contributor Author

Forward-based RL algorithms like DiffusionNFT do not have this parameter, so it is not a universal parameter for the rollout config.

Comment on lines +27 to +29
# Model-specific diffusion sampling params (e.g. true_cfg_scale, guidance_scale,
# max_sequence_length, noise_level)
extra_configs: {}
Collaborator

By the way, it would be better to list all of the arguments that can be configured via extra_configs, either in comments or in the official docs.

Contributor Author

Agree. See below.

seed: 42

# extra configs for algorithm specific features during validation.
extra_configs: {}
Collaborator

@SamitHuang SamitHuang Apr 12, 2026

Same here. Please state what arguments can be set in the extra_configs of the inference engine.

Contributor Author

@zhtmike zhtmike Apr 12, 2026

Agreed. extra_configs is a temporary design and will be dropped.

I prefer not to put algorithm-specific configs (e.g., noise level, SDE type) or model-specific configs (e.g., maximum model length, true CFG scale) directly into the general rollout config, which would bloat it. So extra_configs is a temporary place for these args; they are fed directly into the rollout pipeline.

I am working on a config refactoring to make this clearer, in a coming PR.
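A minimal sketch of that plumbing, with hypothetical names rather than verl's actual API: the free-form extra_configs dict is simply unpacked into the sampling call, so model-specific keys never touch the general rollout config.

```python
# Hypothetical illustration: forward a free-form extra_configs dict into a
# diffusion sampling call. Function and key names are illustrative only.
def rollout_step(prompt, num_inference_steps, **extra_configs):
    call_kwargs = {"prompt": prompt, "num_inference_steps": num_inference_steps}
    call_kwargs.update(extra_configs)  # e.g. true_cfg_scale, noise_level pass through
    return call_kwargs  # a real rollout would hand this to the pipeline

kwargs = rollout_step("a stop sign", 20, true_cfg_scale=4.0, noise_level=0.7)
```
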

@SamitHuang
Collaborator

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces the FlowGRPO trainer for diffusion models, specifically targeting image generation tasks such as Qwen-Image. Key additions include a new Ray-based trainer (RayFlowGRPOTrainer), specialized diffusion advantage estimators, and metric utilities for image generation. The PR also refactors the RL dataset to support negative prompts and multi-modal data handling, updates the FSDP engine for diffusers, and adds comprehensive e2e tests and OCR preprocessing examples. Review feedback highlighted a critical issue with method forwarding in the FSDP wrapper's disable_adapter context manager and a logic error in the GRPO advantage calculation regarding standard deviation computation over expanded timesteps.
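The expanded-timesteps issue can be illustrated with a small pure-Python sketch (hypothetical names, not the PR's code): the normalization statistics should be computed over the per-sample rewards, and only then broadcast across the denoising timesteps.

```python
from statistics import mean, stdev

def expand_advantages(rewards, num_timesteps, eps=1e-6):
    """Normalize once over the samples, then repeat per denoising step.
    Computing mean/std after repeating each reward num_timesteps times would
    feed duplicated values into the statistics instead of the true group."""
    m, s = mean(rewards), stdev(rewards)        # stats over samples only
    advs = [(r - m) / (s + eps) for r in rewards]
    return [[a] * num_timesteps for a in advs]  # shape: (num_samples, T)
```
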
