
Releases: tracel-ai/burn

v0.21.0-pre.4

27 Apr 20:49
8e485cb

Pre-release

What's Changed

Full Changelog: v0.21.0-pre.3...v0.21.0-pre.4

v0.21.0-pre.3

08 Apr 20:11

Pre-release

What's Changed

Full Changelog: v0.21.0-pre.2...v0.21.0-pre.3

v0.21.0-pre.2

02 Mar 18:44

Pre-release

What's Changed

Full Changelog: v0.21.0-pre.1...v0.21.0-pre.2

v0.21.0-pre.1

09 Feb 22:13
3fa8dfa

Pre-release

What's Changed

v0.20.1

23 Jan 17:43


Bug Fixes & Improvements

v0.20.0

15 Jan 16:08


Summary

This release marks a major turning point for the ecosystem with the introduction of CubeCL. Our goal was to solve a classic challenge in deep learning: achieving peak performance on diverse hardware without maintaining fragmented codebases.

By unifying CPU and GPU kernels through CubeCL, we've managed to squeeze maximum efficiency out of everything from NVIDIA Blackwell GPUs to standard consumer CPUs.

Beyond performance, this release makes the library more robust, flexible, and significantly easier to debug.

This release also features a complete overhaul of the ONNX import system, providing broader support for a wide range of ONNX models. In addition, various bug fixes and new tensor operations enhance stability and usability.

For more details, check out the release post on our website.

Changelog

Breaking

We've introduced several breaking API changes in this release. The affected interfaces are detailed in the sections below.

Training

We refactored burn-train to better support different abstractions and custom training strategies. As part of this,
the LearnerBuilder has been replaced by the LearningParadigm flow:

- let learner = LearnerBuilder::new(ARTIFACT_DIR)
+ let training = SupervisedTraining::new(ARTIFACT_DIR, dataloader_train, dataloader_valid)
        .metrics((AccuracyMetric::new(), LossMetric::new()))
        .num_epochs(config.num_epochs)
-       .learning_strategy(burn::train::LearningStrategy::SingleDevice(device))
-       .build(model, config.optimizer.init(), lr_scheduler.init().unwrap());
+       .summary();
 
- let result = learner.fit(dataloader_train, dataloader_valid);
+ let result = training.launch(Learner::new(
+      model,
+      config.optimizer.init(),
+      lr_scheduler.init().unwrap(),
+ ));

Interface Changes

The scatter and select_assign operations now require an IndexingUpdateOp to specify the update behavior.

- let output = tensor.scatter(0, indices, values);
+ let output = tensor.scatter(0, indices, values, IndexingUpdateOp::Add);
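
To make the new argument concrete, here is a plain-Rust sketch of what the Add update behavior means on 1-D data. The `scatter_add` helper below is purely illustrative and is not part of burn's tensor API:

```rust
// Illustrative sketch of scatter-add semantics on 1-D data (not burn's API):
// starting from a copy of the input, output[indices[i]] += values[i] for each i.
// Repeated indices accumulate, which is what the Add update op implies.
fn scatter_add(input: &[f32], indices: &[usize], values: &[f32]) -> Vec<f32> {
    let mut output = input.to_vec();
    for (&idx, &val) in indices.iter().zip(values.iter()) {
        output[idx] += val;
    }
    output
}

fn main() {
    let input = [1.0, 2.0, 3.0];
    let indices = [0, 2, 2];
    let values = [10.0, 20.0, 30.0];
    // index 0 gains 10; index 2 gains 20 + 30 because it appears twice
    let out = scatter_add(&input, &indices, &values);
    assert_eq!(out, vec![11.0, 2.0, 53.0]);
}
```

Other `IndexingUpdateOp` variants would swap the `+=` for a different combine step while keeping the same indexing pattern.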

API calls for slice, slice_assign, and slice_fill no longer require const generics for dimensions, which cleans up the syntax quite a bit:

- let prev_slice = tensor.slice::<[Range<usize>; D]>(slices.try_into().unwrap());
+ let prev_slice = tensor.slice(slices.as_slice());

The grid_sample_2d operation now supports different options.
To preserve the previous behavior, make sure to specify the matching options:

- let output = tensor.grid_sample_2d(grid, InterpolateMode::Bilinear);
+ let options = GridSampleOptions::new(InterpolateMode::Bilinear)
+     .with_padding_mode(GridSamplePaddingMode::Border)
+     .with_align_corners(true);
+ let output = tensor.grid_sample_2d(grid, options);
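
For intuition on what those options control, here is a plain-Rust sketch of a 1-D analogue of border padding with align_corners = true. The `sample_1d_border` helper is hypothetical, not burn's implementation:

```rust
// Illustrative 1-D analogue of grid sampling (not burn's implementation):
// a grid coordinate in [-1, 1] is mapped to an index in [0, len - 1]
// (align_corners = true), clamped to the border, then linearly interpolated.
fn sample_1d_border(data: &[f32], coord: f32) -> f32 {
    let n = data.len();
    // align_corners = true: -1.0 maps to index 0, 1.0 maps to index n - 1
    let x = (coord + 1.0) / 2.0 * (n as f32 - 1.0);
    // border padding: out-of-range coordinates clamp to the edge values
    let x = x.clamp(0.0, n as f32 - 1.0);
    let lo = x.floor() as usize;
    let hi = x.ceil() as usize;
    let frac = x - lo as f32;
    data[lo] * (1.0 - frac) + data[hi] * frac
}

fn main() {
    let data = [0.0, 10.0, 20.0];
    assert_eq!(sample_1d_border(&data, -1.0), 0.0); // left corner
    assert_eq!(sample_1d_border(&data, 0.0), 10.0); // midpoint
    assert_eq!(sample_1d_border(&data, 2.0), 20.0); // out of range, clamped
}
```

The 2-D operation applies this mapping independently along each axis of the grid.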

The QuantStore variants used in QuantScheme have been updated to support a packing dimension.

  pub enum QuantStore {
      /// Native quantization doesn't require packing and unpacking.
      Native,
+     /// Store packed quantized values in a natively supported packing format (i.e. e2m1x2).
+     PackedNative(usize),
      /// Store packed quantized values in a 4-byte unsigned integer.
-     U32,
+     PackedU32(usize),
 }
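
As an illustration of the packed-storage idea (not burn's implementation), the sketch below packs eight 4-bit values into a single u32, the kind of layout a PackedU32 store implies. The packing-dimension parameter itself is burn-specific and omitted here:

```rust
// Illustrative bit-packing for a 4-bit quant type: eight values per u32.
// This is a sketch of the storage layout only, not burn's quantization code.
fn pack_u4x8(vals: [u8; 8]) -> u32 {
    vals.iter().enumerate().fold(0u32, |acc, (i, &v)| {
        debug_assert!(v < 16, "each value must fit in 4 bits");
        acc | ((v as u32 & 0xF) << (4 * i))
    })
}

fn unpack_u4x8(packed: u32) -> [u8; 8] {
    let mut out = [0u8; 8];
    for (i, slot) in out.iter_mut().enumerate() {
        *slot = ((packed >> (4 * i)) & 0xF) as u8;
    }
    out
}

fn main() {
    let vals = [1, 2, 3, 4, 5, 6, 7, 8];
    let packed = pack_u4x8(vals);
    // the round trip recovers the original 4-bit values
    assert_eq!(unpack_u4x8(packed), vals);
}
```

The new `usize` field on the packed variants specifies along which tensor dimension values are grouped into such words.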

Finally, Shape no longer implements IntoIterator. If you need to iterate by-value over dimensions, access the dims field directly.

- for s in shape {
+ for s in shape.dims {

Module & Tensor

Datasets & Training

Backends

Bug Fixes

Documentation & Examples

Fixes

ONNX Support

Enhancements

Refactoring

  • chore: Update to batch caching PR for cubecl (#3948) @wingertge
  • Refactor IR to define outputs as a function of the operation (#3877) ...

v0.20.0-pre.6

18 Dec 21:27
91dd62c

Pre-release

What's Changed

v0.20.0-pre.5

08 Dec 14:53
42edc63

Pre-release

What's Changed

v0.20.0-pre.4

01 Dec 19:15

Pre-release

What's Changed

v0.20.0-pre.3

24 Nov 17:37
88d662d

Pre-release

What's Changed