
[veomni] feat: bump veomni to v0.1.8#5900

Merged
wuxibin89 merged 1 commit into verl-project:main from deerlu:di/feat-verl-veomni-qwen35vl
Apr 15, 2026

Conversation

@deerlu
Collaborator

@deerlu deerlu commented Apr 7, 2026

What does this PR do?

Bump veomni to v0.1.8

  • Fix the parallel_state init parameter (ep_size -> extra_parallel_sizes) and use a set for basic_modules
  • Automatically rewrite `flash_attention_2/3/4` to VeOmni SP-aware variants
  • Add `_prepare_veomni_flash_attention_kwargs` to precompute cu_seq_lens for packed sequences
  • Slice position_ids when sp_enabled

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, fully_async, one_step_off
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for Flash Attention in the VeOmni engine by adding a utility to derive sequence length metadata from packed position IDs. It also refactors the parallel configuration to use a tuple for extra parallel sizes and improves the deduplication of basic modules for FSDP. Feedback was provided to ensure that the generated sequence length tensors are moved to the same device as the input position IDs to avoid potential device mismatch errors.
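The "slice position_ids when sp_enabled" change from the PR description can be illustrated with a minimal sketch (function and parameter names here are hypothetical, not verl's actual API): with sequence parallelism, hidden states are sharded along the sequence dimension, so per-token tensors such as position_ids must be sliced to the local rank's shard.

```python
import torch

def slice_position_ids_for_sp(position_ids: torch.Tensor, sp_rank: int, sp_size: int) -> torch.Tensor:
    """Hypothetical sketch: keep only the sequence chunk owned by this SP rank."""
    seq_len = position_ids.shape[-1]
    assert seq_len % sp_size == 0, "sequence length must divide evenly across SP ranks"
    chunk = seq_len // sp_size
    # Contiguous chunking along the sequence dimension; actual SP layouts may differ.
    return position_ids[..., sp_rank * chunk : (sp_rank + 1) * chunk]

# Rank 1 of 2 over an 8-token sequence keeps tokens 4..7:
print(slice_position_ids_for_sp(torch.arange(8).unsqueeze(0), sp_rank=1, sp_size=2).tolist())  # [[4, 5, 6, 7]]
```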

Comment thread: verl/workers/engine/veomni/transformer_impl.py
wuxibin89 previously approved these changes Apr 7, 2026
@wuxibin89
Collaborator

@deerlu Do you need to upgrade veomni in CI? It still uses ByteDance-Seed/VeOmni.git@v0.1.4

@deerlu deerlu force-pushed the di/feat-verl-veomni-qwen35vl branch from 6805ac7 to c814128 Compare April 9, 2026 08:52
@deerlu
Collaborator Author

deerlu commented Apr 9, 2026

@deerlu Do you need to upgrade veomni in CI? It still uses ByteDance-Seed/VeOmni.git@v0.1.4

done, bumped to v0.1.8 (released today)

@deerlu deerlu force-pushed the di/feat-verl-veomni-qwen35vl branch 2 times, most recently from a104340 to 13194fc Compare April 9, 2026 12:42
@deerlu deerlu marked this pull request as draft April 9, 2026 12:43
@deerlu deerlu force-pushed the di/feat-verl-veomni-qwen35vl branch 5 times, most recently from 4bad417 to 1f96d92 Compare April 15, 2026 06:02
@deerlu deerlu changed the title [veomni] fix: improve VeOmniEngine and add flash attention kwargs support [veomni] feat: bump veomni to v0.1.8 Apr 15, 2026
@deerlu deerlu force-pushed the di/feat-verl-veomni-qwen35vl branch from 1f96d92 to 05c4964 Compare April 15, 2026 07:21
@wuxibin89 wuxibin89 marked this pull request as ready for review April 15, 2026 09:05
@wuxibin89 wuxibin89 merged commit 59f53cc into verl-project:main Apr 15, 2026
109 of 173 checks passed