Description:
Optimize the accorr (adjusted circular correlation) metric computation, which is currently a performance bottleneck.
Context:
The accorr metric in analyses.py is slow for large datasets. Users have reported long computation times that limit practical use. We need to explore optimization strategies.
Current implementation location: hypyp/analyses.py
Optimization strategies to explore:
- Pre-computing
- Parallelization: joblib for parallel computation across epochs/channels, numba JIT compilation for inner loops, numpy vectorization improvements
- Algorithm optimization: compare against existing circular-statistics libraries (pycircstat, astropy.stats)
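As a sketch of the vectorization idea, assuming accorr is based on circular correlation of instantaneous phase angles (the exact adjustment in analyses.py may differ), the all-pairs correlation can be computed with matrix operations instead of Python loops. The function name `circ_corr_matrix` is illustrative, not part of hypyp:

```python
import numpy as np

def circ_corr_matrix(phases_x, phases_y):
    """Vectorized circular correlation between every pair of channels.

    phases_x, phases_y: (n_channels, n_samples) arrays of phase angles in
    radians. Returns an (n_channels, n_channels) matrix of circular
    correlation coefficients. This follows the standard circular
    correlation formula; the actual accorr adjustment may differ.
    """
    # Sine deviations from each channel's circular mean
    sx = np.sin(phases_x - np.angle(np.exp(1j * phases_x).mean(axis=1, keepdims=True)))
    sy = np.sin(phases_y - np.angle(np.exp(1j * phases_y).mean(axis=1, keepdims=True)))
    num = sx @ sy.T  # sum over samples for every channel pair at once
    den = np.sqrt(np.outer((sx ** 2).sum(axis=1), (sy ** 2).sum(axis=1)))
    return num / den
```

Replacing a double loop over channel pairs with one matrix product is often the single largest win before reaching for joblib or numba.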
Tasks:
- Profile the current accorr implementation (use cProfile or line_profiler)
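A minimal sketch of the profiling step with the standard-library cProfile; `profile_call` is a hypothetical helper, and the commented-out invocation at the bottom should be replaced with the real accorr entry point:

```python
import cProfile
import io
import pstats

def profile_call(func, *args, **kwargs):
    """Run func under cProfile and print the 10 most expensive calls."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args, **kwargs)
    profiler.disable()
    stream = io.StringIO()
    stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
    stats.print_stats(10)
    print(stream.getvalue())
    return result

# Hypothetical usage; substitute the actual accorr call:
# profile_call(analyses.compute_sync, data, metric='accorr')
```

For per-line hotspots inside the accorr loop itself, line_profiler's `@profile` decorator gives finer granularity than cProfile's per-function view.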
Acceptance Criteria:
Benchmark template:
import time
import numpy as np
from hypyp import analyses

def generate_test_data(n_epochs, n_channels, n_samples):
    # Random data as a stand-in for real EEG epochs
    return np.random.randn(n_epochs, n_channels, n_samples)

# Benchmark test data of varying sizes
sizes = [(10, 32, 1000), (50, 64, 2000), (100, 64, 5000)]
for n_epochs, n_channels, n_samples in sizes:
    data = generate_test_data(n_epochs, n_channels, n_samples)
    start = time.time()
    result = analyses.compute_sync(data, metric='accorr')
    elapsed = time.time() - start
    print(f"Size {n_epochs}x{n_channels}x{n_samples}: {elapsed:.2f}s")
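The parallelization strategy could be prototyped with joblib across channel pairs, as sketched below. `circ_corr` and `pairwise_parallel` are illustrative names (and the circular-correlation formula is an assumption about accorr's core), not hypyp functions:

```python
import numpy as np
from joblib import Parallel, delayed

def circ_corr(a, b):
    """Circular correlation of two 1-D phase series (standard formula)."""
    sa = np.sin(a - np.angle(np.exp(1j * a).mean()))
    sb = np.sin(b - np.angle(np.exp(1j * b).mean()))
    return (sa * sb).sum() / np.sqrt((sa ** 2).sum() * (sb ** 2).sum())

def pairwise_parallel(phases, n_jobs=-1):
    """Compute circular correlation for every channel pair with joblib.

    phases: (n_channels, n_samples) array of phase angles.
    """
    n = phases.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i, n)]
    values = Parallel(n_jobs=n_jobs)(
        delayed(circ_corr)(phases[i], phases[j]) for i, j in pairs
    )
    out = np.empty((n, n))
    for (i, j), v in zip(pairs, values):
        out[i, j] = out[j, i] = v  # the matrix is symmetric
    return out
```

Note that per-pair tasks are tiny, so process-based joblib may be dominated by overhead; batching pairs per worker, or parallelizing over epochs instead, is worth benchmarking against the vectorized approach.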