Releases: google/tunix

Tunix v0.1.5 — Critical Issue Fix for v0.1.4

21 Nov 02:19

API Change

This release fixes a critical issue introduced in v0.1.4 that broke core functionality.
We strongly recommend that all users upgrade to v0.1.5.
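
To upgrade (assuming a standard PyPI install, matching the install command from the v0.1.0 notes):

pip install --upgrade "google-tunix[prod]"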

# old:
rl_trainer = GrpoLearner(
  grpo_config=grpo_config,
)
# new:
rl_trainer = GrpoLearner(
  algo_config=grpo_config,
)
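
Note that only the keyword argument name changed (grpo_config becomes algo_config); the config object passed in is unchanged.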

What's Changed

Full Changelog: v0.1.4...v0.1.5

Tunix v0.1.4 — JAX 0.8.1 and Flax 0.12.1

20 Nov 05:13

Highlights

API Changes

# Old:
cluster_config = rl_cluster_lib.ClusterConfig(
    role_to_mesh={
        ...,
    },
    training_config=rl_cluster_lib.RLTrainingConfig(
        ...,
    ),
    rollout_engine=args.rollout_engine,
    rollout_config=base_rollout.RolloutConfig(
        ...,
    ),
    rollout_vllm_model_version=VLLM_MODEL_VERSION,
    ...,
)
# New:
cluster_config = rl_cluster_lib.ClusterConfig(
    role_to_mesh={
        ...,
    },
    training_config=rl_cluster_lib.RLTrainingConfig(
        ...,
    ),
    rollout_engine=args.rollout_engine,
    rollout_config=base_rollout.RolloutConfig(
        ...,
        rollout_vllm_model_version=VLLM_MODEL_VERSION,
        ...,
    ),
)
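
In short, rollout_vllm_model_version is now passed inside RolloutConfig rather than as a top-level ClusterConfig argument.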

New Features

Model Support:

  • Added a configuration for the Qwen2.5-Math-1.5B model.
  • Included mobile fine-tuning examples for Gemma 270M.

SGLang Integration:

  • Introduced an SGLang JAX sampler.
  • Added SGLang JAX mapping for Qwen2 models.
  • Enabled SGLang/JAX CI.

Agentic Workflows:

  • Added ModelAgent and TaskEnvironment for single-turn agentic workflows.
  • Introduced an Agentic GRPOLearner for RL training.
  • Provided a script for GRPO agent mode.
  • Added tests for agentic_grpo_learner.
  • Implemented Agentic GRPO with multi-iteration support and fixes.

Training & Evaluation:

  • Added support for ORPO trainer.
  • Included scripts for OSS math500 evaluation and DeepScaleR.

Infrastructure:

  • Added a Dockerfile and build scripts for Tunix GKE development.
  • Implemented GitHub Actions workflows for Tunix TPU nightly regression testing.
  • Added plugin-style support for custom logging backends in MetricsLogger (see the sketch after this list).
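
As a rough illustration of what a plugin-style backend could look like, here is a minimal sketch. The class shape, the log_scalar method, and the backends= argument are illustrative assumptions, not the actual MetricsLogger API.

# Hypothetical plugin-style logging backend; `log_scalar` and the
# `backends=` argument are assumed names for illustration only.
class StdoutBackend:
    """Writes each scalar metric to stdout."""

    def log_scalar(self, name: str, value: float, step: int) -> None:
        print(f"step={step} {name}={value:.4f}")

# A MetricsLogger wired to custom backends might then be constructed as:
# metrics_logger = MetricsLogger(backends=[StdoutBackend()])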

Improvements

Model Loading & Configuration:

  • Refactored model loading from Flax Orbax checkpoints, including fixes for Gemma and Gemma2.
  • Refactored the Gemma ModelConfig to explicitly include all model variants.
  • Relaxed frozen configuration for models.

Performance & Efficiency:

  • Improved speed of safetensor loading.
  • Added a per-Python-thread timeline and export of performance metrics to metrics_logger.
  • Rewrote the performance tracer with a new data model.
  • Enabled vLLM Data Parallelism on Tunix.

Architecture & Refactoring:

  • Moved agentic code out of the experimental folder.
  • Moved rollout-related configs from the cluster config to rollout_config.
  • Updated trajectory engine code.
  • Updated RolloutOrchestrator logic.
  • Implemented a concrete naming structure for parsing HuggingFace model IDs.
  • Updated the model module to prevent an AttributeError with pytree=False.

Usability:

  • Updated vanilla sampler to accept single strings.
  • Made put_exception in GRPO agentic learner asynchronous.
  • Enabled micro_batch_size for rollout and reference models in the PPO learner.
  • Added support for user-defined rollout engines.
  • Added Kaggle and GitHub buttons to Tunix example notebooks.
  • Improved HBM usage reporting in multi-process SPMD.

Internal:

  • Refactored TPU tests to run separately based on HF_TOKEN requirements.
  • Updated Tunix GitHub Actions to trigger on push to main.
  • Moved Docker files to the root directory.
  • Added backward compatibility for set_mesh.

Bug Fixes

  • Fixed broken CI due to vLLM.
  • Fixed vLLM driver tests.
  • Improved test collection to only include target tests.
  • Fixed a conditional issue in the Tunix Gemma implementation.
  • Fixed nnx.remat usage with bound methods.
  • Fixed the OSS GRPO training script.
  • Fixed Qwen2 mapping for SGLang/JAX.
  • Fixed an incorrect loss type issue.
  • Fixed max_step initialization when profiling.
  • Fixed issues with multiple metrics loggers.
  • Reduced test flakiness.
  • Fixed broken links in README.md.
  • Corrected algo_config naming in GRPOLearner.
  • Fixed the get_logprobs_from_vllm_output utility function.
  • Fixed TypeError in example notebooks by updating mesh indexing (MESH[0] to len(MESH[0])).
  • Addressed an unusual edge-case bug. (Details pending)

Documentation

  • Fixed documentation build for ReadTheDocs.
  • Minor fix to the grpo_demo description.
  • Added README for SGLang JAX.
  • Updated docstring usage for dataclasses.

Internal & Tooling

  • Automated GitHub issue assignment to all engineers.
  • Converted notebook files (.ipynb) to Python scripts (_nb.py) and removed Jupyter cell markers.
  • Updated debug logging.
  • Pinned Qwix version to 0.1.1 (and later removed the pin).
  • Ensured latest dependencies are installed by forcing reinstall.
  • Temporarily disabled SGLang tests.
  • Removed gcsfs from pyproject.toml dependencies.

Tunix v0.1.3 — JAX 0.8 and new Qwen / Llama3 model support

20 Oct 17:43

A maintenance and feature release focused on TPU readiness, test hardening, and model additions. Highlights include a JAX upgrade, SFT/CI improvements, new Qwen and Llama3 model variants, and multiple bug fixes across training and distillation tooling.

Highlights

  • Bumped JAX to 0.8.0 for improved compatibility and performance. JAX 0.7.2 has a compilation performance regression, so we are skipping that version.
  • Added vLLM TPU support to dev mode.
  • Added Qwen2.5 (including 1.5B) and Llama3 (70B & 405B) support.

What's Changed

Full Changelog: v0.1.2...v0.1.3

Tunix v0.1.2: Expanded Model Support and Enhanced Flexibility

10 Oct 18:14

This release of Tunix introduces support for new models, enhances core functionalities for more flexible and efficient workflows, and includes several important fixes.

Highlights

  • Expanded Model Support: We've added a configuration for qwen-8b and ported the Llama3 example to the Tunix CLI. Additionally, disaggregated GRPO for llama3.1-70b is now supported through MaxText, including checkpoint saving.
  • Enhanced Flexibility: Users can now specify a different data type for the rollout model and take advantage of more flexible PyTree support in the checkpoint manager. This release also introduces flexible collect modes and tokenization support, along with support for multiple EOS tokens in the vanilla sampler.

Other Changes

  • Downgraded the JAX version to 0.7.1 in prod mode due to a performance regression; dev mode still supports JAX 0.7.2.
  • Fixes to the front page pip install command and GRPO examples.
  • Improvements to the checkpoint manager and resharding library.
  • Added a backward compatibility test for Orbax checkpoint restoration.
  • Various code simplifications, refactoring, and documentation updates.

What's Changed

Full Changelog: v0.1.1...v0.1.2

Tunix v0.1.1 — Improved Stability, New Features, and TPU Optimizations

08 Oct 01:58

This release focuses on improving performance and stability across TPU and Kaggle environments, introducing new utilities for agentic RL workflows, and adding broader model and configuration support. It also includes several important bug fixes and developer experience improvements.

Run Tunix on Kaggle TPU

We’re excited to announce that Tunix can now be launched directly in Kaggle notebooks with TPU acceleration — making it easier than ever to experiment, prototype, and run reinforcement learning workflows without complex setup.

Key highlights

First-class TPU support on Kaggle – run GRPO and other RL pipelines end-to-end in a Kaggle notebook.

Pre-configured runtime – no manual dependency juggling needed; version compatibility and performance tuning are handled automatically.

Launch the notebooks here:
Knowledge Distillation Demo
QLoRA Demo
DPO Demo
GRPO Demo

New Features & Improvements

Model & Training Options:

  • Added support for Gemma-3-270M model configuration.
  • Enabled setting a default parameter dtype for Gemma-3 models.
  • Added remat options to models to improve memory efficiency.
  • Created a new list container type to support both Flax ≤0.11.2 and ≥0.12.0 versions.

Pathways & TPU Performance:

  • Introduced experimental pre-sharding (experimental_reshard) for Pathways on Cloud TPU.
  • Improved weight synchronization logic to handle KV head duplication.
  • Disabled certain profiler options by default to improve stability on the Pathways backend.

Configuration & CLI Improvements:

  • Enabled generic creation of optax optimizers and learning-rate schedules directly from the CLI (see the sketch after this list).
  • Relaxed JAX version constraints to ensure compatibility with Kaggle images.
  • Added minimum resource requirements for launch scripts in the README.

Documentation:

  • Added a ReadTheDocs link to the README.
  • Expanded external notebooks with step-by-step guidance for long-running tasks.
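
As a rough sketch of what generic creation from the CLI enables: the optax calls below are real optax APIs, while the build_optimizer helper and the example flag names are illustrative assumptions, not the actual Tunix CLI surface.

import optax

# Hypothetical helper mirroring CLI-driven construction: look up the
# optimizer and schedule by name on the optax module, then wire the
# schedule in as the learning rate.
def build_optimizer(opt_name, schedule_name, **schedule_kwargs):
    schedule = getattr(optax, schedule_name)(**schedule_kwargs)
    return getattr(optax, opt_name)(learning_rate=schedule)

# e.g. flags like --optimizer=adamw --schedule=warmup_cosine_decay_schedule
optimizer = build_optimizer(
    "adamw",
    "warmup_cosine_decay_schedule",
    init_value=0.0,
    peak_value=3e-4,
    warmup_steps=100,
    decay_steps=1_000,
)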

Bug Fixes

  • Fixed a bug in reward function logic causing incorrect training signals.
  • Fixed a checkpoint handling issue where Colab failed to locate the final checkpoint and now cleans up intermediate directories.
  • Fixed Kaggle image performance issues.
  • Fixed type errors in agents/ modules.
  • Optimized masked index lookups using jnp.where for better runtime efficiency (see the sketch after this list).
  • Resharded prompt and completion tokens to the REFERENCE mesh when rollout and reference models are distributed.
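
As an illustration of the jnp.where pattern (the exact call sites in Tunix are not shown in these notes), a minimal self-contained example:

import jax.numpy as jnp

# Select values under a mask without boolean indexing, which would
# produce a data-dependent output shape; jnp.where keeps shapes static
# and avoids the associated host synchronization.
logits = jnp.arange(6.0)
mask = jnp.array([True, False, True, False, True, False])
masked = jnp.where(mask, logits, 0.0)  # zeros where mask is False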

Dependency & Version Updates

  • JAX pinned to 0.7.1 and libtpu downgraded to resolve Cloud TPU performance regressions.
  • Relaxed JAX version requirement for Kaggle compatibility.

New Contributors

Full Changelog: v0.1.0...v0.1.1

Tunix v0.1.0 — First Public Release of Google’s Reinforcement Learning Library for LLM Post-Training

30 Sep 15:42

We’re thrilled to announce Tunix v0.1.0, the first public release of Google’s lightweight, JAX-native library for post-training large language models (LLMs) using both reinforcement learning (RL) and supervised fine-tuning (SFT). Tunix is built for researchers and production teams who want maximum control and scalability when aligning and improving foundation models — from data loading to distributed rollout and training on TPUs.

Highlights of v0.1.0

SFT (Supervised Fine-Tuning): Seamlessly train your LLMs with labeled datasets to bootstrap alignment before RL or as a standalone approach.

High-efficiency Reinforcement Learning (RL) algorithms such as GRPO, GSPO, PPO, and DPO, designed for instruction-tuning and reward-based LLM alignment.

End-to-End RL Pipeline: From reward function definition to rollout and policy optimization, everything is fully integrated and composable.
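
For a flavor of what a reward function might look like, here is a minimal sketch; the (prompts, completions, **kwargs) signature and the function itself are assumptions for illustration, not necessarily the exact interface Tunix expects.

# Illustrative reward function with an assumed signature; it rewards
# concise completions, decaying linearly from 1.0 to 0.0 as the
# completion grows from 200 to 1000 characters.
def brevity_reward(prompts, completions, **kwargs):
    return [max(0.0, min(1.0, (1000 - len(c)) / 800)) for c in completions]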

Multi-Model Support: Works out of the box with leading open-weight models, including Gemma 2/3, LLaMA 3, and Qwen 2/3 — and can be extended to other Hugging Face models with minimal effort.

Seamless TPU / CPU Execution: Tunix is built on top of JAX and Flax with first-class support for multi-device and multi-host environments.

Dataset Flexibility: Use TensorFlow Datasets, Kaggle datasets, or custom Grain datasets with minimal changes.

Modular Design: Clean abstractions for samplers, reward functions, trainers, and optimizers — making it easy to extend or plug into your own workflows.

Get Started

Install Tunix from PyPI:

pip install "google-tunix[prod]"

We recommend starting with the GRPO demo notebook to see how reinforcement learning can be applied to real LLM training.

Tunix 0.1.0.dev1 – Development Preview

30 Sep 07:03

This is the first development release of Tunix, Google’s reinforcement learning library for language model post-training.

Note: This is a pre-release (.dev1) version meant for testing and feedback.

APIs and behavior may change before the official 0.1.0 stable release.

Use this build to validate early integrations, experiment with new features, and provide feedback.

Install this dev release:

pip install --pre "google-tunix[prod]==0.1.0.dev1"

Tunix 0.1.0.dev0 – Development Preview

30 Sep 04:45

This is the first development release of Tunix, Google’s reinforcement learning library for language model post-training.

Note: This is a pre-release (.dev0) version meant for testing and feedback.

APIs and behavior may change before the official 0.1.0 stable release.

Use this build to validate early integrations, experiment with new features, and provide feedback.

Install this dev release:

pip install --pre google-tunix==0.1.0.dev0