This repository was archived by the owner on Jan 27, 2025. It is now read-only.

Commit efc305e

Merge branch 'main' into enh/bold-registration-experiment
2 parents 73337c0 + 3cba47d commit efc305e

6 files changed (+541 −26 lines)

README.rst

Lines changed: 4 additions & 0 deletions
@@ -47,9 +47,13 @@ More recently, Cieslak et al. [#r3]_ integrated both approaches in *SHORELine*,
 the work of ``eddy`` and *SHORELine*, while generalizing these methods to multiple acquisition schemes
 (single-shell, multi-shell, and diffusion spectrum imaging) using diffusion models available with DIPY [#r5]_.
 
+.. BEGIN FLOWCHART
+
 .. image:: https://raw.githubusercontent.com/nipreps/eddymotion/507fc9bab86696d5330fd6a86c3870968243aea8/docs/_static/eddymotion-flowchart.svg
    :alt: The eddymotion flowchart
 
+.. END FLOWCHART
+
 .. [#r1] S. Ben-Amitay et al., Motion correction and registration of high b-value diffusion weighted images, Magnetic
    Resonance in Medicine 67:1694–1702 (2012)
 .. [#r2] J. L. R. Andersson. et al., An integrated approach to correction for off-resonance effects and subject movement

docs/conf.py

Lines changed: 30 additions & 0 deletions
@@ -234,6 +234,14 @@
 apidoc_separate_modules = True
 apidoc_extra_args = ["--module-first", "-d 1", "-T"]
 
+
+# -- Options for autodoc extension -------------------------------------------
+autodoc_default_options = {
+    "special-members": "__call__, __len__",
+}
+autoclass_content = "both"
+
+
 # -- Options for intersphinx extension ---------------------------------------
 
 # Example configuration for intersphinx: refer to the Python standard library.
@@ -253,3 +261,25 @@
 
 # -- Options for versioning extension ----------------------------------------
 scv_show_banner = True
+
+
+# -- Special functions -------------------------------------------------------
+import inspect
+
+
+def autodoc_process_signature(app, what, name, obj, options, signature, return_annotation):
+    """Replace the class signature by the signature from cls.__init__"""
+
+    if what == "class" and hasattr(obj, "__init__"):
+        try:
+            init_signature = inspect.signature(obj.__init__)
+            # Convert the Signature object to a string
+            return str(init_signature), return_annotation
+        except ValueError:
+            # Handle cases where `inspect.signature` fails
+            return signature, return_annotation
+    return signature, return_annotation
+
+
+def setup(app):
+    app.connect("autodoc-process-signature", autodoc_process_signature)
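
The hook added above relies only on standard-library introspection. A minimal, standalone sketch of the same idea (the ``Point`` class below is hypothetical, used purely for illustration)::

    import inspect


    class Point:
        """Toy class standing in for any documented class."""

        def __init__(self, x: float, y: float = 0.0):
            self.x = x
            self.y = y


    # Autodoc would otherwise show an empty signature for the class itself;
    # reading it off ``__init__`` recovers the constructor arguments.
    print(inspect.signature(Point.__init__))  # -> (self, x: float, y: float = 0.0)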

docs/index.rst

Lines changed: 2 additions & 3 deletions
@@ -1,9 +1,8 @@
 .. include:: links.rst
 .. include:: ../README.rst
-   :end-line: 29
+   :end-before: BEGIN FLOWCHART
 .. include:: ../README.rst
-   :start-line: 34
-
+   :start-after: END FLOWCHART
 
 .. image:: _static/eddymotion-flowchart.svg
    :alt: The eddymotion flowchart

docs/installation.rst

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ Installation
 ============
 Make sure all of *eddymotion*' `External Dependencies`_ are installed.
 
-On a functional Python 3.7 (or above) environment with ``pip`` installed,
+On a functional Python 3.10 (or above) environment with ``pip`` installed,
 *eddymotion* can be installed using the habitual command ::
 
     $ python -m pip install eddymotion

docs/notebooks/dwi_gp_estimation.ipynb

Lines changed: 485 additions & 0 deletions
Large diffs are not rendered by default.

src/eddymotion/model/gpr.py

Lines changed: 19 additions & 22 deletions
@@ -64,7 +64,7 @@
 
 class EddyMotionGPR(GaussianProcessRegressor):
     r"""
-    A GP regressor specialized for eddymotion.
+    A Gaussian process (GP) regressor specialized for eddymotion.
 
     This specialization of the default GP regressor is created to allow
     the following extended behaviors:
@@ -80,22 +80,21 @@ class EddyMotionGPR(GaussianProcessRegressor):
 
     In principle, Scikit-Learn's implementation normalizes the training data
     as in [Andersson15]_ (see
-    `FSL's souce code <https://git.fmrib.ox.ac.uk/fsl/eddy/-/blob/2480dda293d4cec83014454db3a193b87921f6b0/DiffusionGP.cpp#L218>`__).
+    `FSL's source code <https://git.fmrib.ox.ac.uk/fsl/eddy/-/blob/2480dda293d4cec83014454db3a193b87921f6b0/DiffusionGP.cpp#L218>`__).
     From their paper (p. 167, end of first column):
 
-        Typically one just substracts the mean (:math:`\bar{\mathbf{f}}`)
+        Typically one just subtracts the mean (:math:`\bar{\mathbf{f}}`)
         from :math:`\mathbf{f}` and then add it back to
         :math:`f^{*}`, which is analogous to what is often done in
         "traditional" regression.
 
     Finally, the parameter :math:`\sigma^2` maps on to Scikit-learn's ``alpha``
-    of the regressor.
-    Because it is not a parameter of the kernel, hyperparameter selection
-    through gradient-descent with analytical gradient calculations
-    would not work (the derivative of the kernel w.r.t. alpha is zero).
+    of the regressor. Because it is not a parameter of the kernel, hyperparameter
+    selection through gradient-descent with analytical gradient calculations
+    would not work (the derivative of the kernel w.r.t. ``alpha`` is zero).
 
-    I believe this is overlooked in [Andersson15]_, or they actually did not
-    use analytical gradient-descent:
+    This might have been overlooked in [Andersson15]_, or else they actually did
+    not use analytical gradient-descent:
 
     *A note on optimisation*
 
@@ -266,7 +265,6 @@ def __init__(
         l_bounds: tuple[float, float] = BOUNDS_LAMBDA,
     ):
         r"""
-        Initialize an exponential Kriging kernel.
 
         Parameters
         ----------
@@ -275,7 +273,7 @@ def __init__(
         beta_l : :obj:`float`, optional
             The :math:`\lambda` hyperparameter.
         a_bounds : :obj:`tuple`, optional
-            Bounds for the a parameter.
+            Bounds for the ``a`` parameter.
         l_bounds : :obj:`tuple`, optional
             Bounds for the :math:`\lambda` hyperparameter.
 
@@ -290,7 +288,7 @@ def hyperparameter_a(self) -> Hyperparameter:
         return Hyperparameter("beta_a", "numeric", self.a_bounds)
 
     @property
-    def hyperparameter_beta_l(self) -> Hyperparameter:
+    def hyperparameter_l(self) -> Hyperparameter:
         return Hyperparameter("beta_l", "numeric", self.l_bounds)
 
     def __call__(
@@ -312,10 +310,10 @@ def __call__(
 
         Returns
         -------
-        K : ndarray of shape (n_samples_X, n_samples_Y)
+        K : :obj:`~numpy.ndarray` of shape (n_samples_X, n_samples_Y)
             Kernel k(X, Y)
 
-        K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims),\
+        K_gradient : :obj:`~numpy.ndarray` of shape (n_samples_X, n_samples_X, n_dims),\
             optional
             The gradient of the kernel k(X, X) with respect to the log of the
             hyperparameter of the kernel. Only returned when `eval_gradient`
@@ -343,12 +341,12 @@ def diag(self, X: np.ndarray) -> np.ndarray:
 
         Parameters
         ----------
-        X : ndarray of shape (n_samples_X, n_features)
+        X : :obj:`~numpy.ndarray` of shape (n_samples_X, n_features)
            Left argument of the returned kernel k(X, Y)
 
        Returns
        -------
-        K_diag : ndarray of shape (n_samples_X,)
+        K_diag : :obj:`~numpy.ndarray` of shape (n_samples_X,)
            Diagonal of kernel k(X, X)
        """
        return self.beta_l * np.ones(X.shape[0])
@@ -372,7 +370,6 @@ def __init__(
         l_bounds: tuple[float, float] = BOUNDS_LAMBDA,
     ):
         r"""
-        Initialize a spherical Kriging kernel.
 
         Parameters
         ----------
@@ -396,7 +393,7 @@ def hyperparameter_a(self) -> Hyperparameter:
         return Hyperparameter("beta_a", "numeric", self.a_bounds)
 
     @property
-    def hyperparameter_beta_l(self) -> Hyperparameter:
+    def hyperparameter_l(self) -> Hyperparameter:
         return Hyperparameter("beta_l", "numeric", self.l_bounds)
 
     def __call__(
@@ -418,10 +415,10 @@ def __call__(
 
         Returns
        -------
-        K : ndarray of shape (n_samples_X, n_samples_Y)
+        K : :obj:`~numpy.ndarray` of shape (n_samples_X, n_samples_Y)
            Kernel k(X, Y)
 
-        K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims),\
+        K_gradient : :obj:`~numpy.ndarray` of shape (n_samples_X, n_samples_X, n_dims),\
            optional
            The gradient of the kernel k(X, X) with respect to the log of the
            hyperparameter of the kernel. Only returned when ``eval_gradient``
@@ -454,12 +451,12 @@ def diag(self, X: np.ndarray) -> np.ndarray:
 
        Parameters
        ----------
-        X : ndarray of shape (n_samples_X, n_features)
+        X : :obj:`~numpy.ndarray` of shape (n_samples_X, n_features)
            Left argument of the returned kernel k(X, Y)
 
        Returns
        -------
-        K_diag : ndarray of shape (n_samples_X,)
+        K_diag : :obj:`~numpy.ndarray` of shape (n_samples_X,)
            Diagonal of kernel k(X, X)
        """
        return self.beta_l * np.ones(X.shape[0])
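
Read together, the docstring edits above describe how these classes are meant to be driven: the kernel exposes ``beta_a``/``beta_l`` as hyperparameters, while :math:`\sigma^2` is passed as ``alpha`` to the regressor and therefore stays out of gradient-based hyperparameter selection. A rough usage sketch only (the ``SphericalKriging`` import name and the toy data shapes are assumptions, not shown in this diff)::

    import numpy as np

    from eddymotion.model.gpr import EddyMotionGPR, SphericalKriging  # names assumed

    # Toy data: unit gradient directions as features, one signal sample per direction.
    rng = np.random.default_rng(1234)
    X = rng.standard_normal((30, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    y = rng.standard_normal(30)

    # sigma^2 enters as ``alpha`` on the regressor, not as a kernel hyperparameter,
    # so analytical-gradient optimization of the kernel never touches it.
    gpr = EddyMotionGPR(kernel=SphericalKriging(), alpha=0.5)
    gpr.fit(X, y)
    y_pred = gpr.predict(X)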

0 commit comments
