Conversation

@gaopengff gaopengff commented Sep 29, 2025

  1. Add an intel_xpu attention backend for llama4.
  2. Do not use the get_device_capability result for XPU, since that query is not supported on XPU devices.
    cc @merrymercy @Ying1123 @zhyncs @hnyls2002
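The second point can be sketched as a small guard in the backend-selection path. This is a hypothetical illustration, not the PR's actual code: the function name `select_attention_backend`, the capability callback, and all backend names except `intel_xpu` are assumptions for the example.

```python
def select_attention_backend(device_type: str, get_capability=None) -> str:
    """Pick an attention backend name for the given device type.

    Hypothetical sketch: only 'intel_xpu' comes from this PR; the other
    backend names and the threshold below are illustrative.
    """
    if device_type == "xpu":
        # XPU does not support CUDA-style capability queries, so skip
        # them entirely and return the dedicated backend.
        return "intel_xpu"
    if device_type == "cuda" and get_capability is not None:
        # On CUDA, a capability query (e.g. torch.cuda.get_device_capability)
        # can still be consulted to choose a backend.
        major, _minor = get_capability()
        return "flashinfer" if major >= 8 else "triton"
    return "torch_native"

print(select_attention_backend("xpu"))                   # intel_xpu
print(select_attention_backend("cuda", lambda: (8, 0)))  # flashinfer
```

The key design point is that the XPU branch returns before any capability query is attempted, so the unsupported call is never reached on Intel hardware.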

@gaopengff gaopengff changed the title Add 'intel_xpu' backend for sglang Add 'intel_xpu' attention backend for sglang Sep 29, 2025
@gaopengff gaopengff changed the title Add 'intel_xpu' attention backend for sglang Add 'intel_xpu' attention backend for llama4 Oct 21, 2025
@gaopengff gaopengff marked this pull request as ready for review October 21, 2025 05:53
@gaopengff gaopengff changed the title Add 'intel_xpu' attention backend for llama4 [Intel]Add 'intel_xpu' attention backend for llama4 Oct 22, 2025
@gaopengff gaopengff marked this pull request as draft October 23, 2025 06:45
@gaopengff gaopengff marked this pull request as ready for review October 23, 2025 06:46
@gaopengff gaopengff requested a review from mingfeima October 29, 2025 07:35
@gaopengff (Contributor, Author)
@zhyncs Could we merge this, as the CI failure seems unrelated to this PR's changes? The PR should only affect behavior on Intel XPU.

@gaopengff (Contributor, Author)
@merrymercy @zhyncs @hnyls2002 Could you help review and merge this PR, as the CI failure seems unrelated to its changes? The PR should only affect behavior on Intel XPU.

@Fridge003 Fridge003 merged commit 4cc725a into sgl-project:main Nov 6, 2025
67 of 76 checks passed
3 participants