
Releases: mmuckley/torchkbnufft

Major Revision - Complex Number Support, Speed Improvements, Updated Documentation

27 Jan 20:48
be0cd5b

This release includes a complete package rewrite, featuring complex tensor support, a 4-fold speed-up on the CPU, a 2-fold speed-up on the GPU, an updated API, and rewritten documentation. The release includes many backwards-compatibility-breaking changes, hence the version increment to 1.0.

A summary of changes follows:

  • Support for PyTorch complex tensors. The user is now expected to pass in tensors of shape [batch_size, num_chans, height, width] for a 2D imaging problem. It's still possible to pass in real tensors - just use [batch_size, num_chans, height, width, 2]. The backend uses complex values for efficiency. (A usage sketch of the new interface follows this list.)
  • A 4-fold speed-up on the CPU and a 2-fold speed-up on the GPU for table interpolation. The primary mechanism is asynchronous task-level parallelism via torch.jit.fork - see interp.py for details.
  • The backend has been substantially rewritten to a higher code quality, adding type annotations and compiling performance-critical functions with torch.jit.script to sidestep the Python GIL.
  • A much improved density compensation function, calc_density_compensation_function, thanks to a contribution from @chaithyagr based on a suggestion from @zaccharieramzi.
  • Simplified utility functions: calc_toeplitz_kernel and calc_tensor_spmatrix.
  • The documentation has been completely rewritten: it now uses the Read the Docs template, has an improved table of contents, adds mathematical descriptions of the core operators, and includes a dedicated basic usage section.
  • Dedicated SENSE-NUFFT operators have been removed. Wrapping these with torch.autograd.Function didn't provide any benefit, so there's no need to keep them. Users now pass their sensitivity maps directly into the forward function of KbNufft and KbNufftAdjoint.
  • Rewritten notebooks and README files.
  • New CONTRIBUTING.md.
  • Removed mrisensesim.py as it is not a core part of the package.
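For orientation, here is a minimal usage sketch of the 1.0 interface based on the items above. The trajectory scaling and keyword names (im_size, ktraj, smaps) follow the rewritten documentation, but treat the exact shapes and signatures shown here as approximate rather than definitive.

```python
import math

import torch
import torchkbnufft as tkbn

im_size = (256, 256)

# complex image tensor: [batch_size, num_chans, height, width]
image = torch.randn(1, 1, *im_size, dtype=torch.complex64)
# k-space trajectory in radians/voxel, assumed here to have shape [2, klength] for 2D
ktraj = (torch.rand(2, 4000) - 0.5) * 2 * math.pi
# coil sensitivity maps, passed directly to forward() (no SENSE-NUFFT object)
smaps = torch.randn(1, 8, *im_size, dtype=torch.complex64)

nufft_ob = tkbn.KbNufft(im_size=im_size)
adjnufft_ob = tkbn.KbNufftAdjoint(im_size=im_size)

kdata = nufft_ob(image, ktraj, smaps=smaps)  # forward NUFFT
dcomp = tkbn.calc_density_compensation_function(ktraj=ktraj, im_size=im_size)
image_adj = adjnufft_ob(kdata * dcomp, ktraj, smaps=smaps)  # compensated adjoint
```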

Small compatibility fixes

14 Jan 18:29
568a859

This fixes a few compatibility issues with newer versions of PyTorch mentioned in Issue #7. Specifically:

  • A NumPy array was converted to a tensor without a copy - this has been modified to explicitly copy.
  • A new fft_compatibility.py file to handle changes in new versions of torch.fft (see here). Basically, the function-style torch.fft was slated for deprecation in favor of torch.fft.fft. We now check the PyTorch version to figure out which one to use so that the code still runs on older versions of PyTorch. (A sketch of this pattern follows below.)
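The general idea is a version check at import time that dispatches to either the old function-style torch.fft or the new module-style torch.fft.fft. The sketch below is illustrative only; the function name, the version cutoff, and the layout are assumptions, not the literal contents of fft_compatibility.py.

```python
import torch

# assumption: the module-style FFT API (torch.fft.fft) is available from 1.7 onward
_TORCH_VERSION = tuple(int(p) for p in torch.__version__.split("+")[0].split(".")[:2])
USE_MODULE_FFT = _TORCH_VERSION >= (1, 7)


def fft2(x: torch.Tensor) -> torch.Tensor:
    """2D FFT on a real tensor with a trailing complex dimension of size 2."""
    if USE_MODULE_FFT:
        import torch.fft  # module form; not importable on older releases

        return torch.view_as_real(
            torch.fft.fftn(torch.view_as_complex(x.contiguous()), dim=(-2, -1))
        )
    # older releases: function-style torch.fft over the last two signal dims
    return torch.fft(x, 2)
```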

Documentation updates, 3D radial density compensation

14 Jan 00:37

This includes a few minor improvements that hadn't yet been released to PyPI. This release also tests the new GitHub Action that should handle PyPI releases automatically.

Code quality release

02 Mar 18:01

This release addresses a couple of code-quality items, primarily:

  • Removal of various torch.clone calls that are no longer necessary given the general lack of in-place operations.
  • Alterations to some list constructions to reuse references and be more efficient.

Documentation Patch

08 Jan 15:03

Patch for the documentation and removal of spurious PyPI files.

Toeplitz NUFFT, code harmonization

08 Jan 14:46

This update adds the Toeplitz NUFFT, a new module that can apply a forward-adjoint NUFFT pair as much as 80 times faster than separate forward and adjoint NUFFT calls. The update also includes code harmonization and better use of inheritance.

Code harmonization

  • Refactored interpolation code to reduce duplication across subroutines.
  • Changed testing framework to use fixtures instead of globals.
  • Added new KbModule class for __repr__ statements. All classes now inherit from KbModule, which reduces __repr__ duplication.
  • Minor performance improvements.

Toeplitz NUFFT

  • Added ToepNufft and ToepSenseNufft classes. These can be used to execute forward/backward NUFFT operations with an FFT filter and no interpolation. The filter can be calculated using the new calc_toep_kernel function in nufft.toep_functions.
  • The new Toeplitz NUFFT routine can apply the forward-adjoint pair as much as 80 times faster than a sequence of separate forward and adjoint NUFFT calls. (See the usage sketch below.)
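A rough usage sketch is below. The module path for calc_toep_kernel comes from the item above; the class names (AdjKbNufft, ToepNufft), shapes, and argument layout are assumptions based on the documentation of that era and changed again in 1.0.

```python
import math

import torch
from torchkbnufft import AdjKbNufft, ToepNufft
from torchkbnufft.nufft.toep_functions import calc_toep_kernel

im_size = (256, 256)
adjnufft_ob = AdjKbNufft(im_size=im_size)
toep_ob = ToepNufft()

# k-space trajectory in radians/voxel; the batch/real-imaginary layout shown
# here follows the pre-1.0 convention only approximately
ktraj = (torch.rand(1, 2, 4000) - 0.5) * 2 * math.pi
image = torch.randn(1, 1, 2, *im_size)

# precompute the Toeplitz FFT filter once, then apply the forward/adjoint
# (normal) operation with two FFTs and no interpolation
kern = calc_toep_kernel(adjnufft_ob, ktraj)
normal_image = toep_ob(image, kern)
```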

Support for alternate KB parameters

25 Nov 14:50
Pre-release

Updates to KB parameters

  • Previously, using KB orders other than 0 and KB widths other than 2.34 was not possible - this is no longer the case.

Updates to initialization

  • Initialization has been harmonized. The code should now cleanly determine whether the input is a float or a tuple for parameters such as 'order' and then check input dimensions prior to building the interpolation tables. (An illustration follows below.)
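As an illustration of the float-or-tuple handling, a minimal sketch follows; the parameter names (numpoints, kbwidth, order) follow the later documentation and are assumptions for this release.

```python
from torchkbnufft import KbNufft

im_size = (256, 256)

# scalar parameters are broadcast to every dimension
nufft_iso = KbNufft(im_size=im_size, numpoints=6, kbwidth=2.34, order=0.0)

# per-dimension tuples are now also accepted and validated against im_size
nufft_aniso = KbNufft(im_size=im_size, numpoints=(6, 8), order=(0.0, 0.0))
```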

Printing models

  • __repr__ methods have been updated for all models, so it should be possible to print(ob) and get a brief summary of that object's attributes instead of a smorgasbord of PyTorch registered buffers. Examples of this have been added to the Jupyter Notebook examples.

Refactoring of nufft utilities

  • nufft_utils has been moved into the nufft submodule, where it more naturally belongs. For now, torchkbnufft itself imports this and aliases it to the old nufft_utils location, but eventually this will no longer be the case.

Speed upgrade and documentation harmonization

21 Nov 20:45

Speed updates

  • Increased adjoint speed in normal operation mode by a factor of 2 (CPU) and 6 (GPU). This was primarily accomplished by replacing calls to torch.bincount with index_add_ and index_put_. (A toy illustration of this pattern appears after this list.)
  • Added a script for profiling on new systems as well as measured profiles for a workstation with a Xeon E5-1620 CPU and an Nvidia GTX 1080 GPU.
  • Removed support for matadj=True and coil_broadcast=True. Coil broadcasting is now done by default. Matrix adjoints for normal operations are no longer necessary with index_add_ and index_put_. Code using these operations will receive deprecation warnings.
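A toy illustration of the scatter-add pattern that replaced torch.bincount; this is not the actual interpolation code, just the accumulation idea.

```python
import torch

# accumulate weighted k-space contributions onto a flattened grid;
# index_add_ sums values that share a target index and runs on CPU or GPU
grid = torch.zeros(16)
indices = torch.tensor([0, 3, 3, 7, 15])
values = torch.tensor([1.0, 0.5, 0.5, 2.0, 1.0])

grid.index_add_(0, indices, values)  # grid[3] receives 0.5 + 0.5
```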

Documentation updates

  • Adjusted all docstrings to conform with Google style. This mostly involved removing variable names from the "Returns" fields.
  • Created HTML-based API documentation on readthedocs.io.

Testing updates

  • Added pytest functions to verify performance across devices (currently CPU and CUDA). This will be necessary going forward as the adjoint operations use different parts of the PyTorch API. (A sketch of the device parametrization pattern appears after this list.)
  • Added tests verifying that the backward pass matches the adjoint layers.
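The device coverage follows the usual pytest parametrization pattern, roughly as sketched below; the test name, tolerances, and body are illustrative, not the package's actual test suite.

```python
import pytest
import torch

DEVICES = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])


@pytest.mark.parametrize("device", DEVICES)
def test_backward_matches_adjoint(device):
    # placeholder check standing in for the real forward/adjoint comparison
    x = torch.randn(8, device=device, requires_grad=True)
    y = (x * 2).sum()
    y.backward()
    assert torch.allclose(x.grad, torch.full_like(x, 2.0))
```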

Bug fix

15 Nov 16:23
Pre-release

Bug Fix

  • Fixed an issue related to matadj options in forward operations. Although forward ops don't usually need sparse matrix adjoints, they do when calling backward().

New feature

  • Added an option to pass a set of coil sensitivity maps to the forward function of MriSenseNufft, overriding the maps provided at initialization. (A minimal sketch follows below.)
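A minimal sketch of the new option; the constructor arguments, tensor layout, and especially the per-call keyword name (smap) are assumptions, not confirmed signatures.

```python
import math

import torch
from torchkbnufft import MriSenseNufft

im_size = (256, 256)
smap = torch.randn(1, 8, 2, *im_size)      # maps fixed at init (approximate pre-1.0 layout)
alt_smap = torch.randn(1, 8, 2, *im_size)  # alternate maps for a single call

sensenufft_ob = MriSenseNufft(smap=smap, im_size=im_size)
ktraj = (torch.rand(1, 2, 4000) - 0.5) * 2 * math.pi
image = torch.randn(1, 1, 2, *im_size)

kdata_default = sensenufft_ob(image, ktraj)                   # uses the maps from init
kdata_override = sensenufft_ob(image, ktraj, smap=alt_smap)   # per-call override
```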

First release.

11 Nov 21:56
Pre-release
v0.1.0

Init.