feat(ingestion/mongodb) re-order aggregation logic #12428
Conversation
Remove an unnecessary comma in the MongoDB aggregation logic to ensure consistency in the code style. This change does not affect functionality but improves code readability.
Refactor the MongoDB document aggregation process by consolidating the aggregation call into a single location, regardless of the sample size condition. This change enhances code clarity and maintains functionality.
Refactor the MongoDB document aggregation logic to streamline the handling of sampling conditions. The changes ensure that the aggregation process is clearer by consolidating the logic for random sampling and limiting sample size into a more cohesive structure, enhancing code readability while maintaining existing functionality.
Reorganize the logic for adding a document size filter in the MongoDB aggregation process. The changed aggregation order improves MongoDB scanning performance.
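The reordering described in these commits can be sketched as below. This is an illustrative assumption of the pipeline shape, not the actual DataHub code: the helper name `build_pipeline` and the exact stage contents are invented for the example. The point is that the `$sample`/`$limit` stage is appended before the document-size stages, so the expensive per-document work only runs over the sampled subset.

```python
from typing import Any, Dict, List


def build_pipeline(
    use_random_sampling: bool,
    sample_size: int,
    max_document_size: int,
) -> List[Dict[str, Any]]:
    """Sketch: put the sampling/limit stage FIRST so that later stages
    (e.g. a document-size filter) only see the sampled subset instead
    of the whole collection."""
    aggregations: List[Dict[str, Any]] = []

    # 1. Shrink the working set as early as possible.
    if use_random_sampling:
        aggregations.append({"$sample": {"size": sample_size}})
    else:
        aggregations.append({"$limit": sample_size})

    # 2. Only now compute each document's BSON size and filter on it;
    #    this touches sample_size documents, not the full collection.
    #    ($bsonSize requires MongoDB 4.4+.)
    aggregations.append(
        {"$addFields": {"temporary_doc_size_field": {"$bsonSize": "$$ROOT"}}}
    )
    aggregations.append(
        {"$match": {"temporary_doc_size_field": {"$lt": max_document_size}}}
    )
    return aggregations


pipeline = build_pipeline(
    use_random_sampling=True, sample_size=1000, max_document_size=16 * 1024 * 1024
)
print(list(pipeline[0]))  # → ['$sample']  (sampling stage comes first)
```

The same list would be passed to `collection.aggregate(pipeline)` in pymongo; only the construction order is shown here.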
Codecov Report: all modified and coverable lines are covered by tests ✅
…ndom sampling
- Introduced a new golden JSON file for MongoDB ingestion without random sampling.
- Updated the test suite to include a pipeline that verifies the ingestion process.
- Ensured the output is validated against the new golden file to maintain data integrity.
This commit enhances the testing framework for MongoDB ingestion, ensuring that ingestion works as expected without random sampling.
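A golden-file check of the kind this commit describes can be sketched as follows. The helper name `validate_against_golden` and the file name are illustrative assumptions, not the actual DataHub test utilities:

```python
import json
import tempfile
from pathlib import Path


def validate_against_golden(output: dict, golden_path: Path) -> bool:
    """Compare ingestion output against a stored golden JSON file.

    Serializing both sides with sorted keys makes the comparison
    insensitive to dict ordering, a common pitfall in golden-file tests.
    """
    golden = json.loads(golden_path.read_text())
    return json.dumps(output, sort_keys=True) == json.dumps(golden, sort_keys=True)


# Minimal usage with a throwaway golden file (hypothetical file name).
with tempfile.TemporaryDirectory() as tmp:
    golden_path = Path(tmp) / "mongodb_mces_no_sampling_golden.json"
    golden_path.write_text(json.dumps({"entities": 3, "sampled": False}))
    assert validate_against_golden({"sampled": False, "entities": 3}, golden_path)
```

In a real pipeline test, the ingestion run writes its output file and the check compares it against the committed golden file, regenerating the golden on intentional changes.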
@@ -218,26 +218,25 @@ def construct_schema_pymongo(
    """
    aggregations: List[Dict] = []
So the order of the aggregations impacts execution time. By setting the sample/limit aggregation first, the subsequent aggregations process a much smaller dataset. Is my understanding correct? Could you add a brief code comment to highlight this?
I fixed the linked issue by reordering the aggregation logic: sampling runs first, and measuring document size and filtering happen afterwards, which makes the pipeline much faster.
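The author's point can be illustrated with a toy simulation (invented for this example, not DataHub code): count how many documents the expensive size-check stage touches under each stage ordering.

```python
import random
from typing import Any, Dict, List, Tuple


def apply_pipeline(
    docs: List[Dict[str, Any]], stages: List[str], sample_size: int
) -> Tuple[List[Dict[str, Any]], int]:
    """Toy interpreter for two stage kinds; returns the surviving docs
    and how many times the (expensive) size check ran."""
    size_checks = 0
    current = docs
    for stage in stages:
        if stage == "sample":
            current = random.sample(current, min(sample_size, len(current)))
        elif stage == "size_filter":
            size_checks += len(current)  # size is computed per document seen
            current = [d for d in current if len(str(d)) < 10_000]
    return current, size_checks


docs = [{"i": i} for i in range(10_000)]
# Old order: the size filter scans every document, then we sample.
_, checks_old = apply_pipeline(docs, ["size_filter", "sample"], sample_size=100)
# New order: sample first, so the size filter only sees 100 documents.
_, checks_new = apply_pipeline(docs, ["sample", "size_filter"], sample_size=100)
print(checks_old, checks_new)  # → 10000 100
```

The real MongoDB server applies the same reasoning: stages execute in pipeline order, so placing `$sample`/`$limit` first bounds the work done by every later stage.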
Related function: should_add_document_size_filter
Linked issue: #12427