Check KV4 compatibility with attention backends and add KV4 support to the attention_backend doc #14467
Motivation
Prevent users from running KV4 (FP4 KV cache) with incompatible attention backends by clearly documenting which backends are supported and enforcing runtime checks. This improves reliability and avoids hard-to-diagnose runtime errors from accidentally using KV4 with an unsupported backend.
Modifications
The check is placed in model_runner.py rather than server_args.py so that it executes after default settings have been resolved.
Description / Changes:
1. Backend documentation updates
• Added an FP4 KV cache column to the MLA/MHA backend table.
• Clarified which backend combinations are supported with FP4 KV caches.
2. ModelRunner updates
• Added _handle_kv4_compatibility() to check KV4 compatibility with the attention backend at runtime.
• Logs warnings for potential edge-case incompatibilities.
• Adds assertions to enforce a correct decode_attention_backend for both FA4 + MLA/MHA and non-FA4 + MLA/MHA setups.
• Raises an error if KV4 is used on a non-CUDA platform.
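The runtime check described above can be sketched as follows. This is a minimal illustration, not the actual SGLang code: the class shape, the `server_args` fields, and the set of KV4-capable backends are assumptions for the example.

```python
# Hypothetical sketch of the KV4 compatibility check described in this PR.
# Names (_handle_kv4_compatibility, kv_cache_dtype, decode_attention_backend)
# and the backend list are illustrative assumptions, not the real code.
import logging

logger = logging.getLogger(__name__)

# Backends assumed, for this sketch, to support an FP4 (KV4) KV cache.
KV4_SUPPORTED_DECODE_BACKENDS = {"fa4", "trtllm_mla"}


class ModelRunner:
    def __init__(self, server_args, device: str = "cuda"):
        self.server_args = server_args
        self.device = device

    def _handle_kv4_compatibility(self) -> None:
        """Validate KV4 (FP4 KV cache) against the attention backend."""
        if self.server_args.kv_cache_dtype != "fp4":
            return  # not using KV4, nothing to check

        # KV4 is restricted to CUDA platforms.
        if self.device != "cuda":
            raise ValueError("KV4 (FP4 KV cache) is only supported on CUDA.")

        # Enforce a supported decode attention backend.
        backend = self.server_args.decode_attention_backend
        assert backend in KV4_SUPPORTED_DECODE_BACKENDS, (
            f"decode_attention_backend={backend!r} does not support KV4; "
            f"choose one of {sorted(KV4_SUPPORTED_DECODE_BACKENDS)}"
        )

        # Edge cases may still be untested; warn rather than fail.
        logger.warning("KV4 support is experimental on backend %s.", backend)
```

Running the check early in model-runner initialization (after server-args defaults are filled in) turns a silent bad combination into an immediate, actionable error.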
Testing
Compatibility was verified on B200 (sm100), using Qwen3-235B-A22B as the MHA model and DeepSeek-R1-0528-FP4 as the MLA model.
Next (WIP)
Test KV4 with the FA3 and FlashMLA backends on sm90 to complete the table; this will follow in a separate PR.
Checklist