Spot Finding

As I started to execute on the original small-module design proposal #1450, I realized each method didn't fit quite as nicely as I'd hoped. The outputs of the spot finding methods were too varied, and it would have caused too much confusion around the difference between a located spot and a spot trace. After more thought I've come around to the idea that what spot finding needs is not smaller modules but more abstraction. This proposal represents the "more abstraction" approach:
starfish.spots would include the following two packages: FindSpots and DecodeSpots. Their base contracts would be:

ImageStack -> FindSpots -> SpotFindingResults
codebook, SpotFindingResults -> DecodeSpots -> DecodedIntensityTable
FindSpots will handle:
- locating and measuring spots in every r/ch pair

DecodeSpots will handle:
- building spot traces
- decoding the spot traces
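As a rough sketch of how the two contracts might compose, here is a toy, self-contained mock of the pipeline. All class and function names, the threshold parameter, and the codebook shape are illustrative assumptions, not the actual starfish API:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Hypothetical stand-ins for the two proposed data models.
@dataclass
class SpotFindingResults:
    # (round, channel) -> list of (y, x, intensity) spot attributes
    spots: Dict[Tuple[int, int], List[Tuple[int, int, float]]]

@dataclass
class DecodedIntensityTable:
    # one decoded target per spot
    targets: List[str]

def find_spots(image_stack, threshold=0.5):
    """ImageStack -> SpotFindingResults: locate and measure spots in every r/ch pair."""
    spots = {}
    for (r, ch), plane in image_stack.items():
        spots[(r, ch)] = [(y, x, v)
                          for y, row in enumerate(plane)
                          for x, v in enumerate(row)
                          if v > threshold]
    return SpotFindingResults(spots)

def decode_spots(codebook, results):
    """codebook, SpotFindingResults -> DecodedIntensityTable.
    Trace building and decoding both live here, not in spot finding."""
    targets = []
    for (r, ch), found in sorted(results.spots.items()):
        for _ in found:
            targets.append(codebook.get(ch, "nan"))  # toy rule: channel -> gene
    return DecodedIntensityTable(targets)

# A one-round, two-channel "ImageStack" as nested lists of pixel values.
stack = {(0, 0): [[0.0, 0.9], [0.0, 0.0]],
         (0, 1): [[0.0, 0.0], [0.8, 0.0]]}
codebook = {0: "ACTB", 1: "GAPDH"}

decoded = decode_spots(codebook, find_spots(stack))
print(decoded.targets)  # ['ACTB', 'GAPDH']
```

The point of the sketch is the shape of the data flow: spot finding never sees the codebook, and the user never handles an intermediate trace table directly.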
To support each step, the following two new data models are required:

SpotFindingResults: a dictionary of round/ch indices and their corresponding SpotAttributes
DecodedIntensityTable: aka decoded spot traces. A representation of spots, their attributes, and their corresponding traces. For multiplexed methods the trace is a vector; for non-multiplexed methods the trace is a single value. Each spot also has a decoded target value.
With this refactor we abstract the idea of an IntensityTable away from the user. Instead, spot traces will be built as needed as part of the decoding step. From the user's perspective, they go straight from finding spots to decoding them.
The new workflows for each notebook would be:
osmFISH
osmFISH currently uses a LocalMaxPeakFinder with no reference image. Its new workflow would be:

ISS
ISS currently uses a blob detector on a blobs image. Its new workflow would be:
BaristaSeq
BaristaSeq uses PixelSpotDecoder. I think trying to rework PixelSpotDecoding into this model is more work than it's worth. But I do think that explaining it as simply a Decoder that goes directly from an ImageStack to a DecodedIntensityTable will be clearer than what we currently have.
starMap
StarMap currently uses LocalSearchBlobDetector. Its new workflow would be:
We'll actually be able to delete LocalSearchBlobDetector after this refactor, and instead use a combination of the regular BlobDetector and trace building with a 'nearest_neighbors' method.
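A toy sketch of what 'nearest_neighbors' trace building could look like (the function name, radius parameter, and data layout are assumptions for illustration, not the starfish implementation):

```python
import math

# Toy nearest-neighbor trace building: link each spot in round 0 to its
# nearest spot in every later round, within a search radius.
def build_traces(spots_by_round, search_radius=2.0):
    """spots_by_round: one list of (y, x) spot locations per round.
    Returns one trace per round-0 anchor spot: a (y, x) entry per round,
    with None where no neighbor fell inside the radius."""
    traces = []
    for anchor in spots_by_round[0]:
        trace = [anchor]
        for later in spots_by_round[1:]:
            best, best_d = None, search_radius
            for cand in later:
                d = math.dist(anchor, cand)
                if d <= best_d:
                    best, best_d = cand, d
            trace.append(best)
        traces.append(trace)
    return traces

rounds = [
    [(10, 10), (50, 50)],  # round 0 anchors
    [(11, 10), (80, 80)],  # round 1: only the first anchor has a close neighbor
]
print(build_traces(rounds))
# [[(10, 10), (11, 10)], [(50, 50), None]]
```

A real implementation would use a spatial index (e.g. a k-d tree) rather than this brute-force scan, but the trace-per-anchor output shape is the idea.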
smFISH
smFISH currently uses a TrackpyLocalMaxPeakFinder with no reference image. Its new workflow would be:

The high-level plan of attack for the refactor is:
- Create a FindSpots module with contract ImageStack -> SpotFindingResults
- Create a DecodeSpots module with contract SpotFindingResults -> DecodedIntensityTable
- Fix provenance logging with the new modules
- Move measure_spot_intensity_code to a utility file in the FindSpots module
- Move the existing decoders to the DecodeSpots module
- Refactor the existing spot finding methods to always take in an ImageStack and output SpotFindingResults (I'll probably first make copies of each one under the new FindSpots module so I don't break everything)
- Refactor the existing notebooks to use the new framework
- Delete the DetectSpots package
By deferring trace building to the decoding step, we allow more flexibility in the data structures and processes used, for example Gabriel's graph-based approach #1482.

For SeqFish, the workflow would be LocateSpots.LocalSearchBlob() to get the spot locations, and then a new Decoder that would handle creating the possible trace groupings and decoding them.