Conversation

@alejoe91 (Member)

This PR includes a major refactor of the metrics concept.

It defines a BaseMetric with the core metadata of an individual metric, including dtypes, column names, extension dependencies, and a compute function (see the sketch after the list below).
Another BaseMetricExtension contains a collection of BaseMetrics and deals with most of the machinery, including:

  • setting params
  • checking dependencies and removing metrics whose dependencies are not met
  • computing metrics
  • deleting, merging, splitting metrics
  • preparing data that can be shared across metrics (e.g., PCA projections for PCA metrics, peak info and templates for template metrics)
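
To make the design concrete, here is a minimal sketch of a metric defined this way. The attribute names (metric_name, column_names, dtypes, depends_on) and the compute signature are illustrative guesses, not the PR's exact API:

```python
class BaseMetric:
    """Core metadata of an individual metric (attribute names are illustrative)."""

    metric_name = ""
    column_names = []   # columns this metric adds to the metrics DataFrame
    dtypes = {}         # dtype per output column
    depends_on = []     # analyzer extensions required to compute it

    @staticmethod
    def compute(sorting_analyzer, unit_ids, **params):
        raise NotImplementedError


class NumSpikes(BaseMetric):
    """One of the new spiketrain metrics, expressed in this scheme."""

    metric_name = "num_spikes"
    column_names = ["num_spikes"]
    dtypes = {"num_spikes": int}
    depends_on = []  # needs only the sorting itself

    @staticmethod
    def compute(sorting_analyzer, unit_ids, **params):
        sorting = sorting_analyzer.sorting
        return {u: sorting.get_unit_spike_train(unit_id=u).size for u in unit_ids}
```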

The template_metrics and quality_metrics extensions, plus a new spiketrain_metrics extension, now live in the metrics module. The latter only includes num_spikes and firing_rate, which are also importable as quality metrics.

Still finalizing tests, but this should be 90% done.

@alejoe91 added the qualitymetrics (Related to qualitymetrics module) label on Oct 22, 2025
metrics = pd.DataFrame(index=all_unit_ids, columns=old_metrics.columns)

metrics.loc[not_new_ids, :] = old_metrics.loc[not_new_ids, :]
metrics.loc[new_unit_ids_f, :] = self._compute_metrics(
Member:

Hello, this is a new thing. It'd be great to check whether we can compute this before we try to do it, for the following situation:

Suppose you originally compute a metric using spikeinterface version 103 (or some fork that you've made yourself... ahem).
Then you open your analyzer in si-gui using version 102. There was a new metric introduced in 103, which 102 doesn't know about. When you try to merge, it errors because it can't compute the new metric. So you can't do any merging at all, just because of the inability to compute one metric.
Or you no longer have the recording when you open, so you can't compute sd_ratio or something....

Instead, I'd like to warn if we can't compute a metric and stick in a NaN. We could do that here by checking that metric_names are in self.metric_list.
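
Something like this hypothetical helper is what I have in mind (merge_metrics_safely, known_metric_names, and compute_fn are made-up names for illustration):

```python
import warnings

import numpy as np
import pandas as pd


def merge_metrics_safely(old_metrics, all_unit_ids, new_unit_ids, known_metric_names, compute_fn):
    """Copy old rows; recompute only the metrics this version knows, NaN the rest."""
    metrics = pd.DataFrame(index=all_unit_ids, columns=old_metrics.columns)
    not_new_ids = [u for u in all_unit_ids if u not in new_unit_ids]
    metrics.loc[not_new_ids, :] = old_metrics.loc[not_new_ids, :]

    computable = [c for c in old_metrics.columns if c in known_metric_names]
    skipped = [c for c in old_metrics.columns if c not in known_metric_names]
    if skipped:
        warnings.warn(f"Cannot compute {skipped} with this version; filling with NaN.")
        metrics.loc[new_unit_ids, skipped] = np.nan
    metrics.loc[new_unit_ids, computable] = compute_fn(new_unit_ids, computable)
    return metrics
```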

Member:

Oh, I think I meant to write this at line 1207 about the merging step, but it also applies to splits!

continue
self.data[ext_data_name] = ext_data

self.set_data(ext_data_name, ext_data)
Member:

Just use _set_data directly?

Member Author:

Yeah, then we can remove set_data. I followed the design that we have for other functions, but it might be overkill.

# this loads into memory
ext_data = np.array(ext_data_)
self.data[ext_data_name] = ext_data
self.set_data(ext_data_name, ext_data)
Member:

repeat: Just use _set_data directly?

if unit_ids is None:
    unit_ids = sorting_analyzer.unit_ids

_has_required_extensions(sorting_analyzer, metric_name="snr")
Member:

These checks were there in case people use e.g. compute_snrs directly, so that they still get a good error message.
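
A minimal sketch of what such a guard can look like (the per-metric requirements mapping here is illustrative; the real requirements live with each metric):

```python
def _has_required_extensions(sorting_analyzer, metric_name):
    # illustrative mapping of metric -> required analyzer extensions
    required = {"snr": ["noise_levels", "templates"]}.get(metric_name, [])
    missing = [ext for ext in required if not sorting_analyzer.has_extension(ext)]
    if missing:
        raise ValueError(
            f"Computing '{metric_name}' requires the {missing} extension(s); "
            f"compute them on the SortingAnalyzer first."
        )
```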

ComputeQualityMetrics,
compute_quality_metrics,
)

@chrishalcrow (Member), Oct 22, 2025:

We used to import all the compute functions, like compute_snrs.

Member Author:

I think that users can still access the individual functions with from spikeinterface.metrics.quality.misc_metrics import ...

We can make sure we highlight this in the release notes, but I would avoid clogging the import with all the low-level functions

Member:

I'd be more careful, because our current docs (e.g. https://spikeinterface.readthedocs.io/en/stable/modules/qualitymetrics/firing_range.html) have lots of code which imports the functions directly from spikeinterface.quality_metrics. So I reckon this could break a lot of code.

@@ -1,9 +1,3 @@
from .template_metrics import (
ComputeTemplateMetrics,
compute_template_metrics,
Member:

I think we should deprecate compute_template_metrics, rather than remove it.

Member Author:

It's not removed, just deprecated if you import it from spikeinterface.postprocessing instead of spikeinterface.metrics.

Member:

from spikeinterface.postprocessing import compute_template_metrics gives:

ImportError: cannot import name 'compute_template_metrics' from 'spikeinterface.postprocessing'

Member Author:

OK, we'll get rid of the class and keep a deprecated stub function in template_metrics.
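
For example, a stub along these lines (message wording and exact import path are a sketch):

```python
import warnings


def compute_template_metrics(*args, **kwargs):
    # deprecated alias kept for backward compatibility
    warnings.warn(
        "compute_template_metrics has moved to spikeinterface.metrics; "
        "importing it from spikeinterface.postprocessing is deprecated.",
        DeprecationWarning,
        stacklevel=2,
    )
    from spikeinterface.metrics import compute_template_metrics as _compute

    return _compute(*args, **kwargs)
```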

@@ -1,9 +1,10 @@
from .quality_metric_list import *
from .quality_metric_calculator import (
compute_quality_metrics,
Member:

I also think we should deprecate compute_quality_metrics from qualitymetrics.

Member Author:

same

@chrishalcrow (Member)

This looks great - love the prepare_data idea! And this refactor will make it less awkward to develop new metrics. And the pca file is much neater now - nice.

I think this is a good chance to remove compute_{metric_name} stuff from the docs (modules/qualitymetrics) and replace with analyzer.compute("quality_metrics", metric_names={metric_name}) as our recommended method. More awkward, but much better for provenance etc.
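
For example (a sketch, using firing_range purely as an example metric name):

```python
# Recommended: compute via the analyzer so parameters and provenance are recorded
ext = sorting_analyzer.compute("quality_metrics", metric_names=["firing_range"])
metrics_df = ext.get_data()  # pandas DataFrame, one row per unit
```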

I'd vote to take the chance to make multi channel template metrics included by default: they're very helpful.

@alejoe91 (Member, Author)

> I'd vote to take the chance to make multi channel template metrics included by default: they're very helpful.

I agree! Maybe we can make it the default when the number of channels is > 64?

22/04/2020
"""

from __future__ import annotations
Member Author:

Keep this file and deprecate it.
