Releases: mmuckley/torchkbnufft
Major Revision - Complex Number Support, Speed Improvements, Updated Documentation
This release includes a complete package rewrite, featuring complex tensor support, a 4-fold speed-up on the CPU, a 2-fold speed-up on the GPU, an updated API, and rewritten documentation. The release includes many backwards-compatibility-breaking changes, hence the version increment to 1.0.
A summary of changes follows:
- Support for PyTorch complex tensors. The user is now expected to pass in tensors of shape `[batch_size, num_chans, height, width]` for a 2D imaging problem. It's still possible to pass in real tensors - just use `[batch_size, num_chans, height, width, 2]`. The backend uses complex values for efficiency.
- A 4-fold speed-up on the CPU and a 2-fold speed-up on the GPU for table interpolation. The primary mechanism is asynchronous task forking via `torch.jit.fork` - see `interp.py` for details.
- The backend has been substantially rewritten to a higher code quality, adding type annotations and compiling performance-critical functions with `torch.jit.script` to get rid of the Python GIL.
- A much-improved density compensation function, `calc_density_compensation_function`, thanks to a contribution of @chaithyagr on the suggestion of @zaccharieramzi.
- Simplified utility functions `calc_toeplitz_kernel` and `calc_tensor_spmatrix`.
- The documentation has been completely rewritten: it now uses the Read the Docs template, has an improved table of contents, adds mathematical descriptions of the core operators, and includes a dedicated basic usage section.
- Dedicated SENSE-NUFFT operators have been removed. Wrapping these with `torch.autograd.Function` didn't give us any benefits, so there's no need to have them. Users will now pass their sensitivity maps into the `forward` function of `KbNufft` and `KbNufftAdjoint` directly.
- Rewritten notebooks and README files.
- New `CONTRIBUTING.md`.
- Removed `mrisensesim.py` as it is not a core part of the package.
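As a toy illustration of the two tensor conventions described above (a standalone NumPy sketch - torchkbnufft itself operates on torch tensors, and the array names here are hypothetical):

```python
import numpy as np

# Toy data in the old "real" format: trailing axis of size 2 holds
# the real and imaginary parts.
batch, chans, h, w = 1, 2, 4, 4
rng = np.random.default_rng(0)
real_fmt = rng.standard_normal((batch, chans, h, w, 2))

# New-style complex format: combine the trailing axis into complex values.
complex_fmt = real_fmt[..., 0] + 1j * real_fmt[..., 1]
assert complex_fmt.shape == (batch, chans, h, w)

# A contiguous real array can also be reinterpreted without copying the data.
viewed = np.ascontiguousarray(real_fmt).view(np.complex128)[..., 0]
assert np.allclose(viewed, complex_fmt)
```

The zero-copy view is why a complex backend is cheap to adopt: the `[..., 2]` real layout and the complex layout describe the same bytes.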
Small compatibility fixes
This fixes a few compatibility issues that could have arisen in new versions of PyTorch, mentioned in Issue #7. Specifically:
- A NumPy array was converted to a tensor without a copy - this has been modified to explicitly copy.
- A new `fft_compatibility.py` file to handle changes to `torch.fft` in new versions of PyTorch (see here). Briefly, calling `torch.fft` as a function was deprecated in favor of the `torch.fft.fft` module function. We now check the PyTorch version to figure out which one to call, so the code can still run on older versions of PyTorch.
Documentation updates, 3D radial density compensation
This includes a few minor improvements that haven't been released to PyPI. This release also tests the new GitHub Action that should automatically handle PyPI releases.
- Documentation and package install for profiling (PR #3)
- 3D radial density compensation and stack of spirals density compensation (f9ac098c8f122026e8e8866828cb5957118a5679)
Code quality release
This release addressed a couple of code quality items, primarily:
- Removal of various `torch.clone` calls that are no longer necessary given the general lack of in-place operations.
- Alterations to some list statements to reuse references and be more efficient.
Documentation Patch
Patch for documentation, spurious PyPI files.
Toeplitz NUFFT, code harmonization
This update adds the Toeplitz NUFFT, a new module that can execute a sequence of forward and backward NUFFTs as much as 80 times faster than a sequence of forward and adjoint NUFFT calls. The update also includes code harmonization and better use of inheritance.
Code harmonization
- Refactored interpolation code to reduce duplication across subroutines.
- Changed testing framework to use fixtures instead of globals.
- Added a new `KbModule` class for `__repr__` statements. All classes now inherit from `KbModule`, which reduces `__repr__` duplication.
- Minor performance improvements.
Toeplitz NUFFT
- Added `ToepNufft` and `ToepSenseNufft` classes. These can be used to execute forward/backward NUFFT operations with an FFT filter and no interpolations. The filter can be calculated using the new `calc_toep_kernel` function in `nufft.toep_functions`.
- The new Toeplitz NUFFT routine can be used to execute forward/backward NUFFTs as much as 80 times faster than a sequence of forward and backward NUFFT calls.
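The trick behind the speed-up is that the Gram operator A^H A of a non-uniform DFT is Toeplitz, so applying it reduces to a zero-padded FFT filter. A tiny 1D NumPy sketch of the mathematics (explicit matrices, illustrative sizes - not the package's implementation, which works on images via `calc_toep_kernel`):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # image size
omega = rng.uniform(-np.pi, np.pi, 20)   # non-uniform k-space locations
A = np.exp(-1j * np.outer(omega, np.arange(n)))  # forward non-uniform DFT

# Gram matrix entries depend only on (row - col): a Toeplitz matrix.
gram = A.conj().T @ A
col = np.array([np.sum(np.exp(1j * omega * k)) for k in range(-(n - 1), n)])

# Embed the Toeplitz operator in a 2n x 2n circulant, diagonalized by the FFT.
c = np.concatenate([col[n - 1:], [0], col[:n - 1]])  # circulant first column
kernel = np.fft.fft(c)

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
direct = gram @ x                                     # O(n^2) matrix apply
fast = np.fft.ifft(kernel * np.fft.fft(x, 2 * n))[:n]  # pad, filter, crop
assert np.allclose(direct, fast)
```

The `kernel` here plays the role of the precomputed Toeplitz filter: once it is built, every forward/backward pair costs two FFTs and an elementwise multiply, with no interpolation.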
Support for alternate KB parameters
Updates to KB parameters
- Previously, it was not possible to use KB orders other than 0 or KB widths other than 2.34 - this is no longer the case.
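For reference, an order-0 Kaiser-Bessel interpolation kernel of width W can be sketched as below. This is a generic textbook form with an illustrative shape-parameter choice, not the package's exact parameterization:

```python
import numpy as np

def kaiser_bessel(u, width=2.34, beta=None):
    """Order-0 Kaiser-Bessel kernel at offsets u, supported on |u| <= width/2."""
    if beta is None:
        beta = np.pi * width / 2  # illustrative choice of shape parameter
    arg = 1.0 - (2.0 * u / width) ** 2
    inside = np.i0(beta * np.sqrt(np.clip(arg, 0.0, None)))
    # Zero outside the support, normalized to 1 at the center.
    return np.where(arg > 0, inside, 0.0) / np.i0(beta)

u = np.linspace(-1.17, 1.17, 5)
vals = kaiser_bessel(u)  # peaks at the center, vanishes at the support edge
```

Supporting other orders means swapping the order-0 modified Bessel function `np.i0` for higher-order variants, which is the generalization this release enables.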
Updates to initialization
- Initialization has been harmonized. The code should now cleanly determine whether the input is a float or a tuple for parameters such as 'order' and then check input dimensions prior to building interpolation tables.
Printing models
`__repr__` methods have been updated for all models, so it should be possible to call `print(ob)` and get a brief summary of the object's attributes instead of a smorgasbord of PyTorch registered buffers. Examples of this have been added to the Jupyter Notebook examples.
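The shared-base-class pattern behind this can be sketched as follows (hypothetical class names, not the package's source):

```python
class ReprBase:
    """Base class providing one concise attribute summary for all subclasses."""

    def __repr__(self):
        attrs = ", ".join(
            f"{k}={v!r}" for k, v in vars(self).items() if not k.startswith("_")
        )
        return f"{type(self).__name__}({attrs})"

class ToyNufft(ReprBase):
    def __init__(self, im_size, numpoints=6):
        self.im_size = im_size
        self.numpoints = numpoints

print(ToyNufft((64, 64)))  # ToyNufft(im_size=(64, 64), numpoints=6)
```

Each subclass inherits the summary for free, which is how one base-class `__repr__` removes per-model duplication.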
Refactoring of nufft utilities
`nufft_utils` has been moved into the `nufft` submodule, where it belongs. Currently, `torchkbnufft` itself will import this and alias it to the old `nufft_utils` location, but eventually this will no longer be the case.
Speed upgrade and documentation harmonization
Speed updates
- Increased adjoint speed in normal operation mode by a factor of 2 (CPU) and 6 (GPU). This was primarily accomplished by replacing calls to `torch.bincount` with `index_add_` and `index_put_`.
- Added a script for profiling on new systems, as well as measured profiles for a workstation with a Xeon E5-1620 CPU and an Nvidia GTX 1080 GPU.
- Removed support for `matadj=True` and `coil_broadcast=True`. Coil broadcasting is now done by default. Matrix adjoints for normal operations are no longer necessary with `index_add_` and `index_put_`. Code using these options will receive deprecation warnings.
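The pattern being swapped can be illustrated with a NumPy sketch: adjoint gridding accumulates weighted samples into grid cells, which can be written either as a bincount or as an in-place indexed add (the role `index_add_`/`index_put_` play in torch; `np.add.at` here). The two give identical results:

```python
import numpy as np

rng = np.random.default_rng(0)
idx = rng.integers(0, 16, size=100)    # target grid cell for each sample
vals = rng.standard_normal(100)        # sample values to accumulate

# Accumulation via bincount (the old approach in spirit).
via_bincount = np.bincount(idx, weights=vals, minlength=16)

# Accumulation via in-place scatter-add (the new approach in spirit).
via_scatter = np.zeros(16)
np.add.at(via_scatter, idx, vals)

assert np.allclose(via_bincount, via_scatter)
```

Scatter-add writes directly into the output buffer, which avoids the intermediate allocations a bincount-based formulation needs and maps better onto the GPU.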
Documentation updates
- Adjusted all docstrings to conform with Google style. This mostly involved removing variable names from the "Returns" fields.
- Created HTML-based API documentation on readthedocs.io.
Testing updates
- Added pytest functions to verify performance across devices (currently CPU and CUDA). This will be necessary going forward as the adjoint operations use different parts of the PyTorch API.
- Added tests for backwards pass matching adjoint layers.
Bug fix
Bug Fix
- Fixed an issue related to `matadj` options in forward operations. Although forward ops don't usually need sparse matrix adjoints, they do when calling `backward()`.
New feature
- Added an option to pass a set of sensitivity coils to the `forward` function in `MriSenseNufft` that overwrites the coils that were included on initialization.
First release.
v0.1.0 Init.