feat: modular sync architecture with accorr numba/torch optimizations #250
Conversation
- Create `hypyp/sync/` module with individual metric files:
  - `plv.py`, `ccorr.py`, `accorr.py`, `coh.py`, `imaginary_coh.py`
  - `pli.py`, `wpli.py`, `envelope_corr.py`, `pow_corr.py`
- Add `base.py` with `BaseMetric` abstract class and helper functions
- Add backend support (numpy default, numba/torch for future optimization)
- Add `get_metric()` function for retrieving metrics by name
- Update `compute_sync()` to delegate to the sync module
- Add deprecation warnings to old helper functions
- All implementations verified identical to original code
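The registry pattern this commit describes (`BaseMetric` subclasses looked up by name via `get_metric()`) could be sketched roughly as follows. The names `BaseMetric`, `get_metric`, and `compute` come from the commit message; the bodies and the `_METRICS` dict are illustrative, not HyPyP's actual implementation:

```python
from abc import ABC, abstractmethod
import numpy as np

class BaseMetric(ABC):
    """Abstract base each sync metric (plv, ccorr, accorr, ...) subclasses."""

    @abstractmethod
    def compute(self, complex_signal: np.ndarray) -> np.ndarray:
        """Return the connectivity values for the given analytic signal."""

class PLV(BaseMetric):
    def compute(self, complex_signal):
        # Phase-locking value: magnitude of the mean unit phase vector
        # over the last (samples) axis. Illustrative only.
        phases = complex_signal / np.abs(complex_signal)
        return np.abs(phases.mean(axis=-1))

# Hypothetical registry mapping metric names to classes.
_METRICS = {"plv": PLV}

def get_metric(name: str) -> BaseMetric:
    """Look up a metric by name, as compute_sync() would when delegating."""
    try:
        return _METRICS[name]()
    except KeyError:
        raise ValueError(f"unknown sync metric: {name!r}")
```

The point of the design is that `compute_sync()` no longer needs a per-metric `if/elif` chain: it resolves the metric class once and calls a uniform `compute()` interface.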
- Simplify `packages` config in `pyproject.toml` (Poetry auto-includes subpackages)
- Re-execute tutorial notebooks to refresh outputs
… API

Integrate accorr optimizations (numba JIT, PyTorch GPU) from PR #246 into the modular sync architecture. Unify the API around a single `optimization` parameter (`None`, `'auto'`, `'numba'`, `'torch'`) with graceful fallback and warnings when backends are unavailable.

- `BaseMetric`: add `_resolve_optimization()` with fallback cascade
- `ACCorr`: numpy/numba/torch backends with precompute optimization
- All metrics: remove dead dispatch code for numpy-only metrics
- `compute_sync`: pass `optimization` directly to `get_metric()`
- Tests: reference-based validation for all backends, mocked fallbacks
- Add optional dependency groups (`optim_torch`, `optim_numba`)

Co-Authored-By: Martín A. Miguel <[email protected]>
Hi @m2march,

This PR integrates your accorr optimizations (numba/torch) from PR #246 into the new modular sync architecture. Your work is credited via co-authorship on the commit and in the `accorr.py` docstring.

Main changes from your original PR:
Could you review the changes before we merge?
I've got one comment and then some less relevant notes:

**Comment**

**Notes**
- Fix `TestAccorrViaComputeSync`: all tests now call `compute_sync()` instead of `ACCorr.compute()` directly, matching the class intent (integration tests)
- Add `complex_signal_raw` fixture to conftest for the `compute_sync` format (2, n_epoch, n_ch, n_freq, n_samp)
- Add torch/numba variants in `TestAccorrViaComputeSync` (skipped when unavailable)
- Document device priority (MPS > CUDA > CPU) and fallback cascade in `base.py`
- Add Modes section + See Also block to the `compute_sync()` docstring in `analyses.py`
- Create `hypyp/sync/README.md`: full metrics reference with formulas and citations
- Credit @m2march in README.md Contributors, CONTRIBUTORS.md, and the `accorr.py` Credits docstring

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
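The device priority documented in this commit (MPS > CUDA > CPU) can be sketched as a small pure function. Availability is passed in as booleans so the sketch runs without torch installed; the real code would presumably query `torch.backends.mps.is_available()` and `torch.cuda.is_available()` instead:

```python
def select_torch_device(mps_available: bool, cuda_available: bool) -> str:
    """Illustrative sketch of the documented priority: MPS > CUDA > CPU.

    Not HyPyP's actual implementation; it only mirrors the rule described
    in base.py's documentation.
    """
    if mps_available:   # Apple Silicon GPU takes precedence
        return "mps"
    if cuda_available:  # then NVIDIA GPU
        return "cuda"
    return "cpu"        # CPU fallback is always available
```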
- coh: replace the incorrectly cited Nolte 2004 (about ImCoh) with the actual reference used in the `Coh` class: Nunez & Srinivasan (2006)
- ccorr: replace Berens 2009 (MATLAB toolbox) with the original statistical reference used in the `CCorr` class: Fisher (1995)

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
Thanks for the review @m2march! All three points addressed in this branch:

1. **Test bug (line 89)** — Fixed
2. **Device priority** — Documented
3. **Math documentation** — Three layers. Since this is a scientific tool and you can never have too much documentation:

**Acknowledgement:** @m2march (Martín A. Miguel) is now credited in

Test results with all backends: 14/14 passed (numpy ✓, numba ✓, torch/MPS ✓)
fix(pr250-review): address m2march's code review feedback
Combine `Optional` (from the sync modular branch) and `Fraction` (from master) in the typing imports.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Summary
- `BaseMetric`, in its own file under `hypyp/sync/`
- `optimization` API: a single parameter (`None`, `'auto'`, `'numba'`, `'torch'`) flows through the entire chain: `compute_sync()` → `get_metric()` → `ACCorr()` → auto-detects GPU with graceful fallback and warnings

Optimization behavior
| `optimization` | Behavior |
| --- | --- |
| `None` | numpy reference implementation (default) |
| `'auto'` | best available backend: torch, then numba, then numpy |
| `'numba'` | numba JIT; falls back to numpy with a warning if unavailable |
| `'torch'` | PyTorch GPU; falls back to numpy with a warning if unavailable |

Usage
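A hedged usage sketch: the analytic-signal shape comes from the `complex_signal_raw` fixture described in the review thread, while the exact `compute_sync` call is assumed from the PR description (shown in comments so the snippet runs standalone without hypyp installed):

```python
import numpy as np

# Analytic signal for a dyad, in the compute_sync format described in
# the conftest fixture: (2 participants, n_epoch, n_ch, n_freq, n_samp).
rng = np.random.default_rng(0)
complex_signal = np.exp(
    1j * rng.uniform(0, 2 * np.pi, size=(2, 10, 4, 3, 256))
)

# With this PR, the call would look roughly like this (parameter names
# assumed from the PR description, not verified against the released API):
#
#   from hypyp import analyses
#   con = analyses.compute_sync(complex_signal, mode='accorr',
#                               optimization='auto')  # or None/'numba'/'torch'
```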
Optional dependencies
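The commit mentions two optional dependency groups, `optim_torch` and `optim_numba`. In Poetry's group syntax they might be declared roughly like this (group names from the commit message; the unpinned version constraints are placeholders, not the actual `pyproject.toml`):

```toml
[tool.poetry.group.optim_numba]
optional = true

[tool.poetry.group.optim_numba.dependencies]
numba = "*"

[tool.poetry.group.optim_torch]
optional = true

[tool.poetry.group.optim_torch.dependencies]
torch = "*"
```

Optional groups are then pulled in explicitly, e.g. `poetry install --with optim_numba,optim_torch`; a plain install keeps the numpy-only footprint.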
Test plan
- `compute_sync()` tests (all backends)
- Other test suites (`test_stats`, `test_fnirs`, etc.) unaffected

🤖 Generated with Claude Code
Co-Authored-By: Martín A. Miguel <[email protected]>