
Add Lo star tracker packet processing#1864

Merged
greglucas merged 6 commits into IMAP-Science-Operations-Center:dev from greglucas:lo-star-tracker
Jul 2, 2025

Conversation

@greglucas
Collaborator

@greglucas greglucas commented Jun 27, 2025

Change Summary

Overview

This adds processing of the Lo Star Sensor packet apid. Since the code is already using `packet_file_to_datasets`, we can also remove the previous star sensor dataclasses.

@sdhoyt, I am a bit confused reading through the algorithm document, so it would be good to have you double check this, but I am pretty sure we are guaranteed 720 values for the data field. Is that a true statement?

  • I am doing the decompression right away. Do you want to make an L1a and L1b file where the L1a would not be decompressed?
  • The csv file included in validation data was huge, so I pared that down to just a few items. Let me know if this isn't acceptable; we could probably change this to use the s3 path instead.

closes #409
closes #410
closes #422
closes #488
closes #939
closes #948

@greglucas greglucas added this to the June 2025 milestone Jun 27, 2025
@greglucas greglucas requested a review from sdhoyt June 27, 2025 16:24
@greglucas greglucas self-assigned this Jun 27, 2025
@greglucas greglucas added labels Ins: Lo (Related to the IMAP-Lo instrument), Level: L0 (Level 0 processing), Level: L1 (Level 1 processing) Jun 27, 2025
Comment thread imap_processing/lo/l1a/lo_l1a.py Outdated
LoAPID.ILO_SCI_DE,
LoAPID.ILO_STAR,
]
apids_with_data = [apid for apid in good_apids if apid in datasets_by_apid]
Collaborator Author

I reorganized this a bit because I think it would have failed if you had a packet file missing some of the apids. We only want to look at the apids included in both lists: the good apids and the ones actually present in the file.
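The filtering described here can be sketched in isolation; the apid values and dataset names below are placeholders, not the real `LoAPID` enum members:

```python
# Sketch of intersecting the "good" apid list with the apids actually
# present in the packet file (values here are placeholders).
good_apids = [101, 102, 103]
datasets_by_apid = {101: "de_dataset", 103: "star_dataset"}  # 102 absent

# Indexing datasets_by_apid[apid] for every good apid would raise a
# KeyError for apid 102; intersecting first avoids that failure mode.
apids_with_data = [apid for apid in good_apids if apid in datasets_by_apid]
print(apids_with_data)  # -> [101, 103]
```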

/ "tests/lo/validation_data"
/ "Instrument_FM1_T104_R129_20240803_ILO_STAR_EU_trimmed.csv"
)
validation_arr = np.loadtxt(validation_file, delimiter=",", skiprows=1, dtype=int)
Collaborator Author

This is easier than pandas read_csv because there are so many columns; we just subset the fields ourselves instead.
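A minimal illustration of subsetting columns with `np.loadtxt`; the CSV contents and `usecols` indices here are made up, not the actual validation-file columns:

```python
import io

import numpy as np

# Hypothetical CSV with many columns; we only care about a couple of them.
csv_text = "a,b,c,d\n1,2,3,4\n5,6,7,8\n"
arr = np.loadtxt(
    io.StringIO(csv_text),
    delimiter=",",
    skiprows=1,      # skip the header row
    usecols=(0, 2),  # subset just the fields we need
    dtype=int,
)
print(arr.tolist())  # -> [[1, 3], [5, 7]]
```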

@greglucas
Collaborator Author

The last two commits refactored this a bit and separated the values out into two datasets, an l1a and an l1b. I think it makes sense to produce both at the same time since the change from l1a to l1b is so minor, so I don't think we need a separate processing job for it.

The l1b dataset is currently called prostar; is that what they want, or does l1b_star better indicate it is processed?

@sdhoyt
Contributor

sdhoyt commented Jun 27, 2025

> The last two commits refactored this a bit and separated the values out into two datasets now, an l1a and l1b. I think it makes sense to do both of these at the same time since it is such a minor change from l1a to l1b so I don't think we need to create a separate processing job for it.
>
> The l1b dataset is currently called prostar, is that what they want or does l1b_star indicate it is processed?

@greglucas I believe the L1A star data should be decompressed. We are decompressing the other data products in L1A as well. But yes, the plan was to use prostar for l1b

# This assumes that all data_compressed entries are of the same length
# but allows for only one frombuffer call
buffer = b"".join(ds["data_compressed"].values)
data = np.frombuffer(buffer, dtype=np.uint8).reshape(-1, 720)
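The pattern in this snippet, one `join` followed by a single `frombuffer` call, can be demonstrated on its own; the byte strings and row length below are toy values, not the real 720-element packets:

```python
import numpy as np

# Toy stand-in for ds["data_compressed"].values: a sequence of
# equal-length byte strings, one per packet.
packets = [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8])]

# One join plus one frombuffer avoids a per-packet frombuffer call;
# reshape(-1, row_len) then recovers one row per packet.
buffer = b"".join(packets)
data = np.frombuffer(buffer, dtype=np.uint8).reshape(-1, 4)
print(data.shape)  # -> (2, 4)
```

Note that `np.frombuffer` returns a read-only view of the bytes; copy it first if the result needs to be mutated.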
Contributor

I'll need to double check on the 720. Originally I thought it would always be that, but Colin told me at one point that it was 720 or less. I'll ask Lo at our tag-up on Monday

Collaborator Author

👍 Let me know if you need any clarification from me or want me to join that tag-up. It won't be hard to change this, but it is a bit unclear what it is stating currently. Section 8.3.5.3 has an explicit 720 listed in the output. Then the following section states:

> There is an exception in this case for the DATA field, which contains is a N X 720 dimensional array, containing the 720 measurements made by the star tracker for each packet.

Should these instead be (N, COUNT) dimensional? If so, for CDF we'd need to make arrays of shape (N, MAX_COUNT) and fill the unused values.
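If the counts had turned out to be variable, the padding idea mentioned here could look roughly like this; `FILL_VALUE` and the row contents are hypothetical, and the thread later confirms the count is always 720 so this ended up unnecessary:

```python
import numpy as np

FILL_VALUE = 255            # hypothetical fill for unused trailing slots
rows = [[1, 2, 3], [4, 5]]  # variable-length measurements per packet
max_count = max(len(r) for r in rows)

# Build an (N, MAX_COUNT) array pre-filled with the fill value, then
# copy each packet's measurements into the leading slots of its row.
padded = np.full((len(rows), max_count), FILL_VALUE, dtype=np.uint8)
for i, row in enumerate(rows):
    padded[i, : len(row)] = row
print(padded.tolist())  # -> [[1, 2, 3], [4, 5, 255]]
```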

Collaborator Author

@sdhoyt any update on the 720 after your tag-up? Nick seemed to indicate he thought it would always be 720 for this packet.

Contributor

Not yet. I'm going to ask in my tag-up tomorrow morning so I'll let you know once I hear.

Contributor

@greglucas I confirmed that there will always be 720 data points in the packet.

Collaborator Author

OK, this is ready for review then!

Contributor

@sdhoyt sdhoyt left a comment

Thanks for doing this!

@greglucas greglucas merged commit d866b22 into IMAP-Science-Operations-Center:dev Jul 2, 2025
14 checks passed
@greglucas greglucas deleted the lo-star-tracker branch July 2, 2025 19:02
