Add Lo star tracker packet processing #1864
greglucas merged 6 commits into IMAP-Science-Operations-Center:dev
Conversation
    LoAPID.ILO_SCI_DE,
    LoAPID.ILO_STAR,
]
apids_with_data = [apid for apid in good_apids if apid in datasets_by_apid]
I reorganized this a bit because I think it would have failed if you had a packet file that was missing some apids. We only want to look at the apids that are in both lists: the good apids and the ones actually present in the file.
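A minimal sketch of the intersection approach described here, with illustrative apid values (the real code uses `LoAPID` enum members and the datasets returned for the packet file):

```python
# Illustrative sketch: keep only apids that are both "good" and actually
# present in the packet file, so a file missing some apids does not fail.
good_apids = [11, 12, 15]  # apids we know how to process (hypothetical values)
datasets_by_apid = {11: "dataset_a", 15: "dataset_b"}  # apids found in the file

apids_with_data = [apid for apid in good_apids if apid in datasets_by_apid]
print(apids_with_data)  # [11, 15]
```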
    / "tests/lo/validation_data"
    / "Instrument_FM1_T104_R129_20240803_ILO_STAR_EU_trimmed.csv"
)
validation_arr = np.loadtxt(validation_file, delimiter=",", skiprows=1, dtype=int)
This is easier than pandas read_csv because there are so many columns; we just subset the fields ourselves instead.
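As a hedged illustration of that choice (the CSV content here is made up, not the real validation data), `np.loadtxt` reads the whole numeric table and the columns of interest can be subset by index afterwards:

```python
import io

import numpy as np

# Toy stand-in for the wide validation CSV; only the shape of the idea matters.
csv_text = "colA,colB,colC,colD\n1,2,3,4\n5,6,7,8\n"
validation_arr = np.loadtxt(
    io.StringIO(csv_text), delimiter=",", skiprows=1, dtype=int
)

# Subset the fields we care about ourselves, by column index.
subset = validation_arr[:, [0, 2]]  # colA and colC
print(subset.tolist())  # [[1, 3], [5, 7]]
```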
This creates l1a unprocessed and l1b processed star sensor data products.
The last two commits refactored this a bit and separated the values into two datasets now, an l1a and an l1b. I think it makes sense to do both of these at the same time, since the change from l1a to l1b is so minor that I don't think we need a separate processing job for it. The l1b dataset is currently called
@greglucas I believe the L1A star data should be decompressed; we are decompressing the other data products in L1A as well. But yes, the plan was to use prostar for l1b.
# This assumes that all data_compressed entries are of the same length
# but allows for only one frombuffer call
buffer = b"".join(ds["data_compressed"].values)
data = np.frombuffer(buffer, dtype=np.uint8).reshape(-1, 720)
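A self-contained sketch of what this snippet does, under the stated assumption that every `data_compressed` entry is the same length (720 bytes here; the payload contents are fabricated for illustration):

```python
import numpy as np

# Two fake 720-byte packet payloads standing in for ds["data_compressed"].values.
payloads = [b"\x01" * 720, b"\x02" * 720]

# One join and one frombuffer call, then reshape into (n_packets, 720).
buffer = b"".join(payloads)
data = np.frombuffer(buffer, dtype=np.uint8).reshape(-1, 720)
print(data.shape)  # (2, 720)
```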
I'll need to double check on the 720. Originally I thought it would always be that, but Colin told me at one point that it was 720 or less. I'll ask Lo at our tag-up on Monday
👍 Let me know if you need any clarification from me or want me to join that tag-up. It won't be hard to change this, but it is a bit unclear what it is stating currently. Section 8.3.5.3 has an explicit 720 listed in the output. Then the following section states:
> There is an exception in this case for the DATA field, which contains is a N X 720 dimensional array, containing the 720 measurements made by the star tracker for each packet.
Should these instead be (N, COUNT) dimensionality? For CDF we'd need to make arrays of shape (N, MAX_COUNT) and fill the unused values.
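If the field had turned out to be variable-length, the (N, MAX_COUNT) idea above could look roughly like this sketch (the fill value and record contents are hypothetical, not from the algorithm document):

```python
import numpy as np

# Hypothetical variable-length records; in practice each would hold one
# packet's star tracker measurements.
records = [np.arange(3, dtype=np.uint8), np.arange(5, dtype=np.uint8)]

# Pad every record out to the maximum length with a fill value so the
# stacked array is rectangular, as CDF needs. 255 is an illustrative fill.
max_count = max(len(rec) for rec in records)
padded = np.full((len(records), max_count), 255, dtype=np.uint8)
for i, rec in enumerate(records):
    padded[i, : len(rec)] = rec
print(padded.shape)  # (2, 5)
```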
@sdhoyt any update on the 720 after your tag-up? Nick seemed to indicate he thought it would always be 720 for this packet.
Not yet. I'm going to ask in my tag-up tomorrow morning so I'll let you know once I hear.
@greglucas I confirmed that there will always be 720 data points in the packet
OK, this is ready for review then!
Change Summary
Overview
This adds processing of the Lo Star Sensor packet apid. Since the code is already using packet_file_to_datasets, we can also remove the previous star sensor dataclasses. @sdhoyt, I am a bit confused reading through the algorithm document, so it would be good to have you double check this, but I am pretty sure we are guaranteed 720 values for the data field. Is that a true statement?
closes #409
closes #410
closes #422
closes #488
closes #939
closes #948