Align tiny-Glm4MoeForCausalLM with GLM-4.5 reference config #5638
qgallouedec wants to merge 3 commits into new-tiny-model-generation from
Conversation
    MODEL_REVISIONS = {
        # Add model_id: revision mappings here to test PRs
        "trl-internal-testing/tiny-Glm4MoeForCausalLM": "refs/pr/1",
Temporary MODEL_REVISIONS entry left in conftest
High Severity
The MODEL_REVISIONS dict contains a temporary entry "trl-internal-testing/tiny-Glm4MoeForCausalLM": "refs/pr/1" that is meant for CI testing only. The comments directly above (lines 25–29) document a 4-step workflow where step 4 is "Remove the entry from this dict and commit." If this PR is merged as-is, all tests loading this model will permanently pull from refs/pr/1 instead of the main branch, which is fragile and incorrect once the Hub PR is merged.
Reviewed by Cursor Bugbot for commit 540502a.
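The lookup pattern the comment describes can be sketched as plain Python. The `MODEL_REVISIONS` dict below is quoted from the diff; `resolve_revision` is a hypothetical helper for illustration only, not the actual conftest fixture (which applies the override globally via `autouse`):

```python
# MODEL_REVISIONS is quoted from the PR diff; resolve_revision is an
# illustrative helper, not the real conftest implementation.
MODEL_REVISIONS = {
    # Add model_id: revision mappings here to test PRs
    "trl-internal-testing/tiny-Glm4MoeForCausalLM": "refs/pr/1",
}

def resolve_revision(model_id: str, default: str = "main") -> str:
    # Models without a pinned PR ref fall back to the default branch.
    return MODEL_REVISIONS.get(model_id, default)

print(resolve_revision("trl-internal-testing/tiny-Glm4MoeForCausalLM"))  # refs/pr/1
print(resolve_revision("some-org/unpinned-model"))  # main
```

This makes the reviewer's concern visible: once the entry is committed, every test run resolves the model to `refs/pr/1` instead of the default branch.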
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 540502a8d3
    MODEL_REVISIONS = {
        # Add model_id: revision mappings here to test PRs
        "trl-internal-testing/tiny-Glm4MoeForCausalLM": "refs/pr/1",
Avoid pinning tests to a Hub PR ref
MODEL_REVISIONS now forces every from_pretrained("trl-internal-testing/tiny-Glm4MoeForCausalLM") call to use refs/pr/1, which is a temporary Hub PR reference rather than an immutable release revision. Because tests/conftest.py applies this autouse fixture globally, CI (including the full make test jobs in .github/workflows/tests.yml) becomes dependent on that PR ref staying available and unchanged; if the Hub PR is closed, rebased, or removed, unrelated test runs will start failing. This entry should be removed once the model PR is merged, or replaced with a stable revision pin.
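To make the "immutable revision" point concrete: on the Hub, only a full commit SHA is guaranteed not to move, while branch names and refs/pr/N refs can change or disappear. A guard like the following (a hypothetical helper, not part of this PR) could reject mutable refs before they are pinned:

```python
import re

def is_immutable_revision(revision: str) -> bool:
    # Only a full 40-hex-character commit SHA is guaranteed not to move;
    # branch names and refs/pr/N refs can be rebased, closed, or deleted.
    return re.fullmatch(r"[0-9a-f]{40}", revision) is not None

print(is_immutable_revision("refs/pr/1"))  # False
print(is_immutable_revision("a" * 40))     # True
```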
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 2 total unresolved issues (including 1 from previous review).
Reviewed by Cursor Bugbot for commit 49d5fca.
    generation_config = GenerationConfig.from_pretrained(MODEL_ID)
    config = Glm4MoeConfig(
    -    vocab_size=len(tokenizer.vocab),
    +    vocab_size=151365,
Hardcoded vocab_size deviates from pattern and likely incorrect
Medium Severity
vocab_size=151365 is hardcoded instead of using len(tokenizer.vocab) like every other tiny model generation script in the repository. The PR description's "after" diff output shows vocab_size is no longer a difference between the reference (151552) and the tiny model — implying the value was expected to match the reference. However, 151365 doesn't match 151552, suggesting the hardcoded value is stale. Using len(tokenizer.vocab) would dynamically produce the correct value and stay consistent with all sibling scripts.
Reviewed by Cursor Bugbot for commit 49d5fca.
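The dynamic pattern the reviewer recommends can be sketched with a stand-in tokenizer. In the real script the tokenizer would come from AutoTokenizer.from_pretrained; the FakeTokenizer class here is purely illustrative, sized to the reference vocab (151552):

```python
# Stand-in for a transformers tokenizer: .vocab maps token strings to ids.
# Sized to the GLM-4.5 reference vocab for illustration.
class FakeTokenizer:
    vocab = {f"<tok_{i}>": i for i in range(151552)}

tokenizer = FakeTokenizer()

# Deriving vocab_size from the tokenizer keeps the tiny config in sync with
# the reference vocabulary, instead of a hardcoded (and here stale) 151365.
vocab_size = len(tokenizer.vocab)
print(vocab_size)  # 151552
```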


What does this PR do?
On top of #5637
before:
after:
Before submitting
AI writing disclosure
We welcome the use of AI tools to help with contributions. For transparency and to help us improve our review process, please indicate the level of AI involvement in this PR.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
Note
Medium Risk
Moderate risk because it changes model configuration (special token IDs/rope/QK norm/MoE params) and removes version gating in several tests, which could cause tokenizer/model loading failures in older transformers environments.
Overview
Aligns the generated tiny-Glm4MoeForCausalLM config with the GLM-4.5 reference by hardcoding the expected vocab_size and adding missing architecture/tokenization-related fields (e.g. eos_token_id/pad_token_id, rope_theta, use_qk_norm, MoE sizing).
Updates test infrastructure to pull trl-internal-testing/tiny-Glm4MoeForCausalLM from refs/pr/1 and removes the transformers>=5.0.0 skipif guards so GLM-4 MoE participates in the chat-template/data-utils/SFT training test matrices.
Reviewed by Cursor Bugbot for commit 49d5fca.