
Align tiny-Glm4MoeForCausalLM with GLM-4.5 reference config #5638

Open
qgallouedec wants to merge 3 commits into new-tiny-model-generation from fix-tiny-glm4-moe

Conversation

@qgallouedec (Member) commented Apr 24, 2026

What does this PR do?

On top of #5637

before:

  attention_bias                                   True                               → False
  eos_token_id                                     [151329, 151336, 151338]           → None
  first_k_dense_replace                            3                                  → 1
  head_dim                                         128                                → <missing>
  hidden_size                                      5120                               → 8
  intermediate_size                                12288                              → 32
  moe_intermediate_size                            1536                               → 1408
  n_routed_experts                                 160                                → 4
  num_attention_heads                              96                                 → 4
  num_experts_per_tok                              8                                  → 2
  num_hidden_layers                                92                                 → 2
  num_key_value_heads                              8                                  → 2
  num_nextn_predict_layers                         1                                  → <missing>
  pad_token_id                                     151329                             → None
  rope_theta                                       1000000                            → 10000.0
  routed_scaling_factor                            2.5                                → 1.0
  use_qk_norm                                      True                               → False
  vocab_size                                       151552                             → 151365

after:

[config_diff] zai-org/GLM-4.5 vs tiny (10 differences)
  first_k_dense_replace                            3                                  → 1
  head_dim                                         128                                → 2
  hidden_size                                      5120                               → 8
  intermediate_size                                12288                              → 32
  moe_intermediate_size                            1536                               → 32
  n_routed_experts                                 160                                → 4
  num_attention_heads                              96                                 → 4
  num_experts_per_tok                              8                                  → 2
  num_hidden_layers                                92                                 → 2
  num_key_value_heads                              8                                  → 2

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

AI writing disclosure

We welcome the use of AI tools to help with contributions. For transparency and to help us improve our review process, please indicate the level of AI involvement in this PR.

  • No AI usage: the PR was written entirely by a human.
  • AI-assisted: some parts were suggested or improved by AI, but the PR was written and reviewed by a human.
  • AI-generated: the PR was mostly or fully generated by an AI tool.

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.


Note

Medium Risk
Moderate risk because it changes model configuration (special token IDs/rope/QK norm/MoE params) and removes version gating in several tests, which could cause tokenizer/model loading failures in older transformers environments.

Overview
Aligns the generated tiny-Glm4MoeForCausalLM config with the GLM-4.5 reference by hardcoding the expected vocab_size and adding missing architecture/tokenization-related fields (e.g. eos_token_id/pad_token_id, rope_theta, use_qk_norm, MoE sizing).
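
For orientation, here is a hedged sketch of what the aligned generation script plausibly constructs, assembled from the values in the diffs above and the code excerpt below; the actual script may set additional fields, and the tokenizer handling and push step are omitted:

    from transformers import Glm4MoeConfig

    # Sketch only, not the repo's exact script. Tiny dimensions come from the
    # "after" diff above; the reference-matching fields (special tokens, rope,
    # QK norm, MoE scaling) come from the entries that no longer differ from
    # zai-org/GLM-4.5.
    config = Glm4MoeConfig(
        vocab_size=151365,  # hardcoded in the diff below; see the Bugbot note on it
        hidden_size=8,
        intermediate_size=32,
        moe_intermediate_size=32,
        num_hidden_layers=2,
        num_attention_heads=4,
        num_key_value_heads=2,
        head_dim=2,
        n_routed_experts=4,
        num_experts_per_tok=2,
        first_k_dense_replace=1,
        num_nextn_predict_layers=1,
        attention_bias=True,
        use_qk_norm=True,
        rope_theta=1_000_000,
        routed_scaling_factor=2.5,
        eos_token_id=[151329, 151336, 151338],
        pad_token_id=151329,
    )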

Updates test infrastructure to pull trl-internal-testing/tiny-Glm4MoeForCausalLM from refs/pr/1 and removes the transformers>=5.0.0 skipif guards so GLM4 MoE participates in chat-template/data-utils/SFT training test matrices.
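
As an illustration of the kind of guard being dropped (hypothetical test name and decorator placement; the repo's tests may differ), a transformers>=5.0.0 gate typically looks like this:

    import pytest
    import transformers
    from packaging import version

    # Hypothetical example of the skipif guard this PR removes so that the
    # GLM-4 MoE tiny model runs in the chat-template/data-utils/SFT matrices.
    @pytest.mark.skipif(
        version.parse(transformers.__version__) < version.parse("5.0.0"),
        reason="Glm4MoeForCausalLM requires transformers>=5.0.0",
    )
    def test_sft_with_tiny_glm4_moe():
        ...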

Reviewed by Cursor Bugbot for commit 49d5fca.

Comment thread: tests/conftest.py

    MODEL_REVISIONS = {
        # Add model_id: revision mappings here to test PRs
        "trl-internal-testing/tiny-Glm4MoeForCausalLM": "refs/pr/1",
    }

Temporary MODEL_REVISIONS entry left in conftest

High Severity

The MODEL_REVISIONS dict contains a temporary entry "trl-internal-testing/tiny-Glm4MoeForCausalLM": "refs/pr/1" that is meant for CI testing only. The comments directly above (lines 25–29) document a 4-step workflow where step 4 is "Remove the entry from this dict and commit." If this PR is merged as-is, all tests loading this model will permanently pull from refs/pr/1 instead of the main branch, which is fragile and incorrect once the Hub PR is merged.
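
Per that workflow, completing step 4 before merge would leave the dict empty again, something like:

    MODEL_REVISIONS = {
        # Add model_id: revision mappings here to test PRs
    }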


Reviewed by Cursor Bugbot for commit 540502a.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.


@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 540502a8d3


Comment thread: tests/conftest.py

    MODEL_REVISIONS = {
        # Add model_id: revision mappings here to test PRs
        "trl-internal-testing/tiny-Glm4MoeForCausalLM": "refs/pr/1",
    }

P2: Avoid pinning tests to a Hub PR ref

MODEL_REVISIONS now forces every from_pretrained("trl-internal-testing/tiny-Glm4MoeForCausalLM") call to use refs/pr/1, which is a temporary Hub PR reference rather than an immutable release revision. Because tests/conftest.py applies this autouse fixture globally, CI (including the full make test jobs in .github/workflows/tests.yml) becomes dependent on that PR ref staying available and unchanged; if the Hub PR is closed/rebased/removed, unrelated test runs will start failing. This should be removed after the model PR merge or replaced with a stable revision pin.
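
For context, a minimal sketch of how such a global revision override could be implemented as an autouse fixture (hypothetical; the repo's actual conftest mechanism may differ):

    # Hypothetical sketch of a conftest-level revision pin, not the repo's code.
    import pytest
    from unittest.mock import patch
    from transformers import AutoModelForCausalLM

    MODEL_REVISIONS = {
        "trl-internal-testing/tiny-Glm4MoeForCausalLM": "refs/pr/1",
    }

    @pytest.fixture(autouse=True)
    def pin_model_revisions():
        original = AutoModelForCausalLM.from_pretrained.__func__  # unbind the classmethod

        def patched(cls, model_id, *args, **kwargs):
            # Inject the pinned revision for mapped model IDs; leave others alone.
            if model_id in MODEL_REVISIONS:
                kwargs.setdefault("revision", MODEL_REVISIONS[model_id])
            return original(cls, model_id, *args, **kwargs)

        with patch.object(AutoModelForCausalLM, "from_pretrained", classmethod(patched)):
            yield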



@cursor (Bot) left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

There are 2 total unresolved issues (including 1 from previous review).



Reviewed by Cursor Bugbot for commit 49d5fca.

    generation_config = GenerationConfig.from_pretrained(MODEL_ID)
    config = Glm4MoeConfig(
-       vocab_size=len(tokenizer.vocab),
+       vocab_size=151365,

Hardcoded vocab_size deviates from pattern and likely incorrect

Medium Severity

vocab_size=151365 is hardcoded instead of using len(tokenizer.vocab) like every other tiny model generation script in the repository. The PR description's "after" diff output shows vocab_size is no longer a difference between the reference (151552) and the tiny model — implying the value was expected to match the reference. However, 151365 doesn't match 151552, suggesting the hardcoded value is stale. Using len(tokenizer.vocab) would dynamically produce the correct value and stay consistent with all sibling scripts.
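
A sketch of the fix being suggested here, restoring the sibling scripts' pattern (MODEL_ID is an assumption standing in for the reference checkpoint the script loads):

    from transformers import AutoTokenizer, Glm4MoeConfig

    MODEL_ID = "zai-org/GLM-4.5"  # assumed reference checkpoint, per the PR title

    # Reviewer's suggestion: derive vocab_size from the tokenizer rather than
    # hardcoding 151365, staying consistent with the sibling generation scripts.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    config = Glm4MoeConfig(
        vocab_size=len(tokenizer.vocab),
    )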


Reviewed by Cursor Bugbot for commit 49d5fca.

