
[WIP] [SmolLM3] Add Backbone and CausalLM #2327


Open · wants to merge 64 commits into master

Conversation

DavidLandup0 (Collaborator)

Description of the change

WIP

Colab Notebook

Checklist

  • I have added all the necessary unit tests for my change.
  • I have verified that my change does not break existing code and works with all backends (TensorFlow, JAX, and PyTorch).
  • My PR is based on the latest changes of the main branch (if unsure, rebase the code).
  • I have followed the Keras Hub Model contribution guidelines in making these changes.
  • I have followed the Keras Hub API design guidelines in making these changes.
  • I have signed the Contributor License Agreement.

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @DavidLandup0, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request, currently a Work In Progress, introduces core utility functions for the SmolLM3 model. It lays the groundwork for the model's backbone and causal language model components by implementing essential attention mechanisms, including helpers for Rotary Positional Embeddings and Grouped Query Attention, culminating in a comprehensive eager attention forward pass.

Highlights

  • New Utility File: A new file keras_hub/src/models/smollm3/smollm3_utils.py has been added to house foundational utility functions for the SmolLM3 model.
  • Rotary Positional Embeddings (RoPE): Helper functions rotate_half (lines 4-7) and apply_rotary_pos_emb (lines 10-15) are introduced to apply Rotary Positional Embeddings to query and key tensors.
  • Grouped Query Attention (GQA) / Multi-Query Attention (MQA): The repeat_kv function (lines 18-25) repeats key/value states so that GQA/MQA key/value heads match the number of query heads.
  • Eager Attention Mechanism: An eager_attention_forward function (lines 28-52) provides a complete attention pass, including scaling, causal masking, softmax, and dropout, designed for eager execution. A sketch of these helpers follows this list.
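For orientation, here is a minimal sketch of what these helpers typically look like. This is an illustration based on the descriptions above and the snippets quoted in the review below, not the exact contents of smollm3_utils.py:

    from keras import ops

    def rotate_half(x):
        # Split the last axis in half and rotate: (x1, x2) -> (-x2, x1).
        # (This mirrors the concatenate-based lines 5-7 flagged in the
        # review below; see the suggested split/stack variant there.)
        x1 = x[..., : x.shape[-1] // 2]
        x2 = x[..., x.shape[-1] // 2 :]
        return ops.concatenate((-x2, x1), axis=-1)

    def apply_rotary_pos_emb(q, k, cos, sin, expand_axis=1):
        # Broadcast cos/sin over the heads axis and rotate q and k.
        cos = ops.expand_dims(cos, expand_axis)
        sin = ops.expand_dims(sin, expand_axis)
        q_embed = (q * cos) + (rotate_half(q) * sin)
        k_embed = (k * cos) + (rotate_half(k) * sin)
        return q_embed, k_embed

    def repeat_kv(hidden_states, n_rep):
        # Repeat each key/value head n_rep times so GQA/MQA tensors of
        # shape (batch, num_kv_heads, seq_len, head_dim) match the
        # number of query heads.
        if n_rep == 1:
            return hidden_states
        batch, num_kv_heads, seq_len, head_dim = ops.shape(hidden_states)
        hidden_states = ops.expand_dims(hidden_states, 2)
        hidden_states = ops.tile(hidden_states, (1, 1, n_rep, 1, 1))
        return ops.reshape(
            hidden_states, (batch, num_kv_heads * n_rep, seq_len, head_dim)
        )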

@gemini-code-assist (bot) left a comment

Code Review

This PR introduces utility functions for a new SmolLM3 model. The changes are a work in progress. My review focuses on ensuring correctness, robustness, and maintainability.

I've identified a critical issue in rotate_half that could cause problems on the JAX backend, a high-severity issue in eager_attention_forward regarding the unconditional application of dropout, and a medium-severity issue about the lack of docstrings for the new public functions.

Providing this feedback now should help in finalizing the implementation.

Comment on lines 5 to 7 of keras_hub/src/models/smollm3/smollm3_utils.py:
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return ops.concatenate((-x2, x1), axis=-1)


critical

This implementation of rotate_half is susceptible to a known bug with XLA compilation on JAX when using ops.concatenate, as noted in keras_hub/src/layers/modeling/rotary_embedding.py. To ensure backend compatibility and robustness, adopt the safer implementation pattern used in RotaryEmbedding.

Suggested change:

    # Before
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return ops.concatenate((-x2, x1), axis=-1)

    # After
    x1, x2 = ops.split(x, 2, axis=-1)
    half_rot_tensor = ops.stack((-x2, x1), axis=-2)
    return ops.reshape(half_rot_tensor, ops.shape(x))
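For illustration, a quick standalone check (not part of the PR) that the two variants agree numerically:

    import numpy as np
    from keras import ops

    x = np.arange(8.0).reshape(1, 8)

    # Concatenate-based version (current PR code).
    a = ops.concatenate((-x[..., 4:], x[..., :4]), axis=-1)

    # Split/stack-based version (suggested change).
    x1, x2 = ops.split(x, 2, axis=-1)
    b = ops.reshape(ops.stack((-x2, x1), axis=-2), ops.shape(x))

    np.testing.assert_allclose(
        ops.convert_to_numpy(a), ops.convert_to_numpy(b)
    )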

    value,
    attention_mask,
    scaling: float,
    dropout: float = 0.0,


high

The eager_attention_forward function should include a training parameter to conditionally apply dropout. This ensures correct behavior during both training and inference.

Suggested change:

    dropout: float = 0.0,
    training: bool = False,

    attn_weights = ops.add(attn_weights, causal_mask)

    attn_weights = ops.softmax(attn_weights, axis=-1)
    attn_weights = random.dropout(attn_weights, rate=dropout)


high

Dropout should only be applied during training to prevent non-deterministic behavior and degraded model performance during inference. Apply dropout conditionally based on the training parameter.

Suggested change:

    attn_weights = (
        random.dropout(attn_weights, rate=dropout) if training else attn_weights
    )
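Putting both suggestions together, the full function might look roughly like the sketch below. The module argument and its num_key_value_groups attribute are assumptions about the surrounding code, not confirmed details of this PR:

    from keras import ops, random

    def eager_attention_forward(
        module,
        query,
        key,
        value,
        attention_mask,
        scaling: float,
        dropout: float = 0.0,
        training: bool = False,
    ):
        # Expand key/value heads to match the number of query heads (GQA).
        key_states = repeat_kv(key, module.num_key_value_groups)
        value_states = repeat_kv(value, module.num_key_value_groups)

        # Scaled dot-product attention logits.
        attn_weights = (
            ops.matmul(query, ops.transpose(key_states, (0, 1, 3, 2))) * scaling
        )

        if attention_mask is not None:
            # Slice the causal mask to the key length and add it to the logits.
            causal_mask = attention_mask[:, :, :, : ops.shape(key_states)[-2]]
            attn_weights = ops.add(attn_weights, causal_mask)

        attn_weights = ops.softmax(attn_weights, axis=-1)
        if training and dropout > 0.0:
            # Per the review: apply dropout only during training.
            attn_weights = random.dropout(attn_weights, rate=dropout)

        attn_output = ops.matmul(attn_weights, value_states)
        # Move heads back next to head_dim: (batch, seq, heads, head_dim).
        attn_output = ops.transpose(attn_output, (0, 2, 1, 3))
        return attn_output, attn_weights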

@keras-team keras-team deleted a comment from gemini-code-assist bot Jul 16, 2025
@mattdangerw (Member) left a comment

Just some quick drive-by comments, know this is still WIP! Looking good so far!

    layer_types,
    mlp_bias,
    rms_norm_epsilon,
    layer_norm_epsilon,

Usually some of these terms (like the epsilons and rope theta) have a consistent value across all the presets we care about, and we give them defaults here. Not super important, just for people who want an easier time making a custom small version of the arch or something like that.
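For example, a hypothetical constructor signature sketching that pattern (argument names and default values are placeholders for illustration, not the actual SmolLM3Backbone API):

    class SmolLM3Backbone(Backbone):
        def __init__(
            self,
            vocabulary_size,
            hidden_dim,
            num_layers,
            # Values shared across the presets we care about get
            # defaults, so custom small variants are easier to build.
            # (Placeholder values for illustration only.)
            rope_theta=10_000.0,
            rms_norm_epsilon=1e-6,
            layer_norm_epsilon=1e-6,
            mlp_bias=False,
            **kwargs,
        ):
            ...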

    )


    def eager_attention_forward(

This is an hf/transformers-ism we'd rather not inherit. Prefer just keeping the code on the layer directly; fine to use a private helper to decompose it a bit more.
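Concretely, the suggested structure might look like the sketch below; _compute_qkv and _compute_attention are hypothetical helper names, not an agreed API:

    class SmolLM3Attention(keras.layers.Layer):
        def call(self, hidden_states, attention_mask=None, training=None):
            query, key, value = self._compute_qkv(hidden_states)
            # Attention math lives on the layer itself; a private helper
            # decomposes `call` instead of a free-floating module function.
            return self._compute_attention(
                query, key, value, attention_mask, training=training
            )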
