Fix Zamba2MambaMixer ignoring use_mamba_kernels=False #44853

Merged
Cyrilvallez merged 7 commits into huggingface:main from sergiopaniego:nemotron-mamba-kernel
Apr 20, 2026
Conversation

@sergiopaniego
Member

What does this PR do?

Zamba2MambaMixer.__init__ calls lazy_load_kernel("mamba-ssm") and lazy_load_kernel("causal-conv1d") unconditionally. Models that inherit from it (like NemotronH) and set use_mamba_kernels=False in their config have the flag ignored, causing failures when the kernels package is installed but causal-conv1d CUDA kernels are not available.

Fix: Gate the lazy_load_kernel calls behind getattr(config, "use_mamba_kernels", True) in the Zamba2 modular.
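The gating described above can be sketched as follows. This is a hypothetical, self-contained illustration of the pattern, not the actual transformers source: `lazy_load_kernel` is stubbed out, and the class body is reduced to the kernel-loading logic.

```python
from types import SimpleNamespace


def lazy_load_kernel(name):
    # Stub standing in for the real kernel loader, which fetches
    # compiled CUDA kernels (and may fail when they are unavailable).
    return f"<kernel:{name}>"


class Zamba2MambaMixer:
    def __init__(self, config):
        self.mamba_ssm = None
        self.causal_conv1d = None
        # Only attempt to load the fused kernels when the config does not
        # explicitly opt out; the default stays True, so configs without
        # the attribute keep the old behavior.
        if getattr(config, "use_mamba_kernels", True):
            self.mamba_ssm = lazy_load_kernel("mamba-ssm")
            self.causal_conv1d = lazy_load_kernel("causal-conv1d")


default_mixer = Zamba2MambaMixer(SimpleNamespace())
disabled_mixer = Zamba2MambaMixer(SimpleNamespace(use_mamba_kernels=False))
print(default_mixer.mamba_ssm)   # stub kernel handle
print(disabled_mixer.mamba_ssm)  # None: loading was skipped
```

Because the guard uses `getattr` with a default of `True`, configs that never define `use_mamba_kernels` (like Zamba2's own) are unaffected, while subclass configs such as NemotronHConfig can opt out.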

Related to: huggingface/trl#5278

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

@ArthurZucker

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

@albertvillanova albertvillanova left a comment


Thanks for addressing the underlying issue! 🤗

Just one question: the attribute use_mamba_kernels is only defined in NemotronHConfig, so why do you use it in Zamba2?

Additionally, you modified modeling_nemotron_h.py, but, as explained in that file header, that file was automatically generated from modular_nemotron_h.py. I think you should modify modular_nemotron_h.py instead.

CC: some maintainers that modified these files recently are @ydshieh, @Cyrilvallez

@sergiopaniego
Member Author

Just one question: the attribute use_mamba_kernels is only defined in NemotronHConfig, so why do you use it in Zamba2?

The problem comes from Zamba2MambaMixer.__init__, which calls lazy_load_kernel unconditionally. NemotronHMamba2Mixer calls super().__init__(), so the kernel loading already happens before NemotronH's own __init__ runs 😓
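The ordering issue can be shown with a minimal sketch (hypothetical class bodies that only mirror the discussion): the parent's `__init__` runs to completion inside the subclass's `super().__init__()` call, so a guard added only in the NemotronH subclass would come too late to prevent the parent's kernel loading.

```python
calls = []


class Zamba2MambaMixer:
    def __init__(self, config):
        # In the real class this is where lazy_load_kernel was invoked
        # unconditionally.
        calls.append("parent loads kernels")


class NemotronHMamba2Mixer(Zamba2MambaMixer):
    def __init__(self, config):
        super().__init__(config)  # parent init (kernel loading) happens here
        # Any use_mamba_kernels check placed here would run only after
        # the parent has already tried to load the kernels.
        calls.append("subclass checks config")


NemotronHMamba2Mixer(object())
print(calls)  # ['parent loads kernels', 'subclass checks config']
```

This is why the guard has to live in the Zamba2 modular itself rather than in the NemotronH subclass.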

Additionally, you modified modeling_nemotron_h.py, but, as explained in that file header, that file was automatically generated from modular_nemotron_h.py. I think you should modify modular_nemotron_h.py instead.

Same idea as before

@albertvillanova
Member

Thanks for the fix in commit 93840ed: much clearer now! 🤗

Let's wait for the opinion of some maintainers.

@github-actions
Contributor

[For maintainers] Suggested jobs to run (before merge)

run-slow: nemotron_h, zamba2

@Cyrilvallez
Member

causing failures when the kernels package is installed but causal-conv1d CUDA kernels are not available.

In this case, shouldn't lazy_load_kernel just grab it from the Hub? Looking at lazy_load_kernel, I don't really see how it can crash?

@sergiopaniego
Member Author

It downloads it from the Hub, but the current version is not compatible with transformers (check here).
This solution is defensive against that. Updating it on the kernels side would also resolve the problem.

cc @danieldk

Member

@Cyrilvallez Cyrilvallez left a comment


Ha I see! Thanks!

@Cyrilvallez Cyrilvallez added this pull request to the merge queue Apr 20, 2026
Merged via the queue into huggingface:main with commit 2fd2618 Apr 20, 2026
21 checks passed
@sergiopaniego sergiopaniego deleted the nemotron-mamba-kernel branch April 20, 2026 08:28
lvliang-intel pushed a commit to lvliang-intel/transformers that referenced this pull request Apr 21, 2026

* Fix NemotronH ignoring use_mamba_kernels=False

* Move to zamba2

* moved to config
