Refactor volume undo/redo mechanism and fix subtle bugs #7506
Conversation
…tLabeled event in throttled manner
…ot change since the last action in a brush/trace action
…the bucket exist; add more todo comments to clarify complex scenarios
…actor-volume-undo
📝 Walkthrough
This pull request refines various parts of the codebase without altering core logic. It updates comments to clarify unused properties and shifts certain inline functions to well-named constants. Several methods and type declarations are refactored for improved clarity, type safety, and consistency, such as renaming state methods (from "markAsPulled" to "markAsRequested"), consolidating action parameters, and introducing a new `BucketSnapshot` abstraction.
…inish bucket requests when needed for snapshots (might still be better to store the requested version but let's not complicate this PR further)
…re race condition when undoing/redoing gc'ed buckets)
…actor-volume-undo
…actor-volume-undo
@daniel-wer I hope I can burden you with the review. Would probably make sense to have a call. I remember that we already talked about this topic a while ago, so you might be interested in this. If you don't have the time, feel free to delegate :)
Actionable comments posted: 2
🔭 Outside diff range comments (1)
frontend/javascripts/oxalis/model/sagas/undo_saga.ts (1)
609-656
: 🛠️ Refactor suggestion
Add error handling for bucket restoration in undo operation.
If `bucket.restoreToSnapshot(bucketSnapshot)` throws an error, the saga might leave the system in an inconsistent state. Introducing error handling or rollback logic could ensure atomic undo behavior.
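As a hedged illustration of this suggestion (not the PR's actual saga code; only `restoreToSnapshot` is taken from the review, the other names are placeholders), a try/catch around the restore calls could look like this:

```typescript
// Hypothetical sketch: wrap the restore in error handling so a failed
// restoreToSnapshot does not leave the undo step half-applied.
interface BucketLike {
  restoreToSnapshot(snapshot: unknown): void;
}

function reportError(message: string, error: unknown): void {
  console.error(message, error);
}

function restoreBucketsAtomically(buckets: Array<[BucketLike, unknown]>): boolean {
  try {
    for (const [bucket, snapshot] of buckets) {
      bucket.restoreToSnapshot(snapshot);
    }
    return true;
  } catch (error) {
    // Abort the whole undo step instead of applying it partially.
    reportError("Undo failed while restoring a bucket snapshot", error);
    return false;
  }
}
```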
🧹 Nitpick comments (14)
frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (3)
58-79
: Consider handling potential compression errors.
`startCompression()` calls `compressTypedArray()` without a `.catch()` block. In case of compression failure, an unhandled promise rejection could lead to runtime errors. Adding minimal error handling or logging could improve reliability.
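A minimal sketch of the kind of `.catch()` handling this asks for, assuming a promise-based compression helper (the function names below are illustrative stand-ins, not the actual webKnossos API):

```typescript
// Illustrative stand-in for a promise-based compression helper.
async function compressTypedArray(data: Uint8Array): Promise<Uint8Array> {
  return data.slice(); // real code would run an actual compression routine
}

function startCompressionSafely(
  data: Uint8Array,
  onCompressed: (compressed: Uint8Array) => void,
): void {
  compressTypedArray(data)
    .then(onCompressed)
    .catch((error) => {
      // Without this catch, a compression failure surfaces as an
      // unhandled promise rejection instead of a visible error.
      console.error("Bucket compression failed", error);
    });
}
```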
95-106
: Decompression flow should also catch errors.
Similar to `startCompression()`, consider handling decompression failures to prevent unhandled promise rejections.
151-170
: Type ignore on `originalData.set(backendData)` deserves a double-check.
Relying on `@ts-ignore` may mask potential BigInt vs. regular TypedArray mismatches or out-of-bounds issues. Confirm that the typed arrays will always be compatible.

frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (1)
26-75
: Consider limiting or logging the queue size.
In large or complex tests, the compression/decompression queues might grow excessively if the test steps don't call the corresponding resolve methods often enough. Logging or capping the queue length could help detect potential memory leaks or test logic issues.
frontend/javascripts/oxalis/model/sagas/undo_saga.ts (1)
249-249
: Potential improvement for user feedback.
The warning message "Cannot redo at the moment. Please try again." does not explicitly clarify the cause (e.g., an active volume operation). Consider showing a more descriptive message to guide users on how to proceed.
Also applies to: 277-277
frontend/javascripts/oxalis/model/actions/volumetracing_actions.ts (2)
14-14
: Leverage type-based documentation.
`BucketSnapshot` is central to undo operations. Consider adding a short doc comment for the parameter to help maintainers quickly grasp how snapshots integrate into the action workflow.
33-33
: Maintain consistent naming.
`MaybeUnmergedBucketLoadedPromise` carries a descriptive name, yet it's quite lengthy. If frequently used throughout, consider a shorter alias (e.g. `MaybeBucketPromise`) to improve readability.

frontend/javascripts/oxalis/model/bucket_data_handling/pullqueue.ts (3)
29-34
: Enhance encapsulation by making properties private.
Switching from public to private fields fosters improved maintainability. Ensure there's adequate testing for any external references to these properties.
115-115
: Preserve consistent queue states.
Re-adding dirty buckets with highest priority is a good approach, but verify we guard against repeated re-queuing if the bucket remains dirty.
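To make the suggested guard concrete, here is a generic sketch (simplified, assumed data structures; the real PullQueue uses bucket objects and numeric priorities):

```typescript
// Track queued bucket keys so a dirty bucket is not enqueued twice.
const queuedKeys = new Set<string>();
const queue: string[] = [];

function enqueueWithHighestPriorityOnce(bucketKey: string): void {
  if (queuedKeys.has(bucketKey)) {
    return; // already pending, avoid duplicate entries
  }
  queuedKeys.add(bucketKey);
  queue.unshift(bucketKey); // highest priority: front of the queue
}

function dequeue(): string | undefined {
  const key = queue.shift();
  if (key != null) {
    queuedKeys.delete(key);
  }
  return key;
}
```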
171-171
: Potential duplication in queue.
The comment properly warns about duplicates. If duplicates become frequent, you might want to deduplicate items prior to enqueuing for performance.

frontend/javascripts/test/model/binary/pullqueue.spec.ts (2)
37-37
: Explicit test identifier.
Having `tracingId = "volumeTracingId"` is helpful for unique references in logs.
57-59
: Dynamic re-require usage.
Re-requiring `pullqueue` ensures updated mocks. This can sometimes carry overhead if repeated; measure if test performance becomes a concern.

frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (2)
79-79
: Expanded `DataBucket` class.
The introduction of new fields and logic for merging data fosters a robust, maintainable design. Keep an eye on the class size to avoid bloat.
188-188
: Strategic debugger invocation.
`this._debuggerMaybe();` is helpful for diagnosing unexpected bucket states. Just be sure it's absent in production builds or toggled via config.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (26)
- frontend/javascripts/admin/api/mesh.ts (1 hunks)
- frontend/javascripts/libs/async/deferred.ts (1 hunks)
- frontend/javascripts/oxalis/api/api_latest.ts (2 hunks)
- frontend/javascripts/oxalis/model/actions/volumetracing_actions.ts (3 hunks)
- frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (18 hunks)
- frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (1 hunks)
- frontend/javascripts/oxalis/model/bucket_data_handling/data_cube.ts (4 hunks)
- frontend/javascripts/oxalis/model/bucket_data_handling/data_rendering_logic.tsx (1 hunks)
- frontend/javascripts/oxalis/model/bucket_data_handling/pullqueue.ts (10 hunks)
- frontend/javascripts/oxalis/model/bucket_data_handling/temporal_bucket_manager.ts (1 hunks)
- frontend/javascripts/oxalis/model/helpers/bucket_compression.ts (1 hunks)
- frontend/javascripts/oxalis/model/helpers/typed_buffer.ts (1 hunks)
- frontend/javascripts/oxalis/model/sagas/clip_histogram_saga.ts (1 hunks)
- frontend/javascripts/oxalis/model/sagas/undo_saga.ts (13 hunks)
- frontend/javascripts/oxalis/model/sagas/update_actions.ts (2 hunks)
- frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx (1 hunks)
- frontend/javascripts/oxalis/store.ts (1 hunks)
- frontend/javascripts/test/model/binary/cube.spec.ts (4 hunks)
- frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (1 hunks)
- frontend/javascripts/test/model/binary/pullqueue.spec.ts (5 hunks)
- frontend/javascripts/test/model/binary/temporal_bucket_manager.spec.ts (3 hunks)
- frontend/javascripts/test/model/texture_bucket_manager.spec.ts (1 hunks)
- frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (1 hunks)
- frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration.spec.ts (2 hunks)
- frontend/javascripts/types/api_flow_types.ts (1 hunks)
- frontend/stylesheets/trace_view/_action_bar.less (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx (1)
Learnt from: dieknolle3333
PR: scalableminds/webknossos#8168
File: frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx:433-434
Timestamp: 2025-03-19T13:07:43.053Z
Learning: In the codebase, certain usages of `segmentationLayer.resolutions` are intentionally retained and should not be changed to `segmentationLayer.mags` during refactoring.
🧬 Code Definitions (10)
- frontend/javascripts/admin/api/mesh.ts (1)
  - frontend/javascripts/oxalis/constants.ts (1): Vector3 (13-13)
- frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx (2)
  - frontend/javascripts/libs/mjs.ts (1): V3 (399-399)
  - frontend/javascripts/oxalis/model/actions/volumetracing_actions.ts (1): addToLayerAction (168-172)
- frontend/javascripts/test/model/binary/temporal_bucket_manager.spec.ts (1)
  - frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (1): DataBucket (79-798)
- frontend/javascripts/oxalis/api/api_latest.ts (1)
  - frontend/javascripts/types/schemas/datasource.types.ts (1): DataLayer (53-84)
- frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (1)
  - frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (2): PendingOperation (6-6), BucketSnapshot (13-149)
- frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration.spec.ts (3)
  - frontend/javascripts/test/helpers/apiHelpers.ts (1): createBucketResponseFunction (38-50)
  - frontend/javascripts/oxalis/constants.ts (1): Vector3 (13-13)
  - frontend/javascripts/oxalis/model/actions/volumetracing_actions.ts (4): setActiveCellAction (194-206), startEditingAction (161-166), addToLayerAction (168-172), finishEditingAction (189-192)
- frontend/javascripts/oxalis/model/bucket_data_handling/pullqueue.ts (2)
  - frontend/javascripts/oxalis/store.ts (1): DataStoreInfo (188-188)
  - frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (1): DataBucket (79-798)
- frontend/javascripts/oxalis/model/helpers/bucket_compression.ts (1)
  - frontend/javascripts/oxalis/model/helpers/typed_buffer.ts (1): uint8ToTypedBuffer (53-65)
- frontend/javascripts/oxalis/model/actions/volumetracing_actions.ts (1)
  - frontend/javascripts/types/api_flow_types.ts (1): BucketDataArray (48-57)
- frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (5)
  - frontend/javascripts/types/api_flow_types.ts (2): ElementClass (33-44), BucketDataArray (48-57)
  - frontend/javascripts/oxalis/constants.ts (1): BucketAddress (22-24)
  - frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (2): PendingOperation (6-6), BucketSnapshot (13-149)
  - frontend/javascripts/oxalis/model/actions/volumetracing_actions.ts (1): MaybeUnmergedBucketLoadedPromise (33-33)
  - frontend/javascripts/oxalis/model/bucket_data_handling/pullqueue.ts (1): PullQueueConstants (15-20)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: circleci_build
🔇 Additional comments (82)
frontend/javascripts/test/model/texture_bucket_manager.spec.ts (1)
28-28
: Method renaming for state management clarity.
The change from `markAsPulled()` to `markAsRequested()` better represents that the bucket is being marked as requested from the data source rather than already pulled. This terminology is clearer and more accurate about the bucket's state.

frontend/stylesheets/trace_view/_action_bar.less (1)
74-77
: Good UI fix for button alignment during loading state.
The added `vertical-align: 0;` property prevents layout shifts when the undo-redo button enters a loading state. This improves user experience by maintaining consistent button positioning throughout state transitions.

frontend/javascripts/oxalis/model/bucket_data_handling/data_rendering_logic.tsx (1)
8-8
: Import refactoring for better type organization.
The `TypedArrayConstructor` type import has been moved from `./bucket` to `../helpers/typed_buffer`, which improves code organization by consolidating type-related functionality in a dedicated module.

frontend/javascripts/oxalis/model/sagas/clip_histogram_saga.ts (1)
11-11
: Consistent import path update.
The import for `getConstructorForElementClass` has been updated to reflect its new location in the `../helpers/typed_buffer` module, maintaining consistency with other similar changes in the codebase.

frontend/javascripts/admin/api/mesh.ts (1)
21-22
: Good documentation of unused properties.
Adding comments to mark unused properties in the `MeshSegmentInfo` type is helpful for code clarity. This helps developers understand that these fields are defined but not currently utilized in the codebase.

frontend/javascripts/oxalis/api/api_latest.ts (2)
620-620
: Good refactoring by extracting the inline function into a named constant.
Extracting the inline function `() => true` into a named constant `truePredicate` improves code readability and maintainability. This follows the principle of giving meaningful names to "magic" values/functions.

628-628
: Using the named constant for better readability.
Using the extracted `truePredicate` constant instead of an inline function makes the code more self-documenting and potentially avoids creating a new function on each execution.

frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx (1)
285-289
: Good optimization to skip redundant processing.
This optimization check prevents unnecessary operations when the mouse moves within the same voxel. By comparing the current position with the last position using `V3.equals()`, the code can skip processing when no meaningful change has occurred, which should improve performance.
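For illustration, the described check can be sketched as follows, assuming a `Vector3` tuple type and an equality helper comparable to `V3.equals` (names here are assumptions, not the exact saga code):

```typescript
type Vector3 = [number, number, number];

const positionsEqual = (a: Vector3, b: Vector3): boolean =>
  a[0] === b[0] && a[1] === b[1] && a[2] === b[2];

let lastLabeledPosition: Vector3 | null = null;

function handleBrushMove(currentPosition: Vector3, labelVoxel: (p: Vector3) => void): void {
  // Skip redundant work while the cursor stays within the same voxel.
  if (lastLabeledPosition != null && positionsEqual(lastLabeledPosition, currentPosition)) {
    return;
  }
  lastLabeledPosition = currentPosition;
  labelVoxel(currentPosition);
}
```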
frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (2)
242-242
: Updated method name from `markAsPulled()` to `markAsRequested()`.
This change aligns with the refactoring of the bucket handling logic, making the state management terminology more precise and consistent. The test has been correctly updated to reflect the API changes in the `DataBucket` class.

245-245
: Updated method name from `markAsPulled()` to `markAsRequested()`.
Similar to the previous change, this updates the method name to match the refactored API in the `DataBucket` class, ensuring the test correctly reflects the current implementation.

frontend/javascripts/oxalis/model/bucket_data_handling/temporal_bucket_manager.ts (1)
29-31
: Improved abstraction by delegating queue operations to the bucket.
This change encapsulates the pull queue priority logic within the `DataBucket` class rather than handling it directly in the `TemporalBucketManager`. This is a good improvement as it follows better object-oriented design principles by allowing the bucket to manage its own interaction with the pull queue.

frontend/javascripts/types/api_flow_types.ts (1)
46-57
: Well-defined type for bucket data arrays.
The introduction of the `BucketDataArray` type provides a clear, centralized definition for the various typed arrays that can be used for bucket data. The comment explicitly indicates when this type needs to be updated, which is helpful for future maintenance. This type will make the code more robust when handling different numeric formats across the application.
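As a hedged sketch, such a centralized typed-array union might look like the following; the exact members in `api_flow_types.ts` may differ:

```typescript
// Sketch of a union type covering the typed arrays a bucket may hold.
// Extend this union whenever a new element class is supported.
type BucketDataArraySketch =
  | Uint8Array
  | Int8Array
  | Uint16Array
  | Int16Array
  | Uint32Array
  | Int32Array
  | Float32Array
  | Float64Array
  | BigUint64Array
  | BigInt64Array;

// Example: functions can now accept any supported bucket payload.
function countNonZero(data: BucketDataArraySketch): number {
  let count = 0;
  for (let i = 0; i < data.length; i++) {
    if (data[i]) count++; // 0 and 0n are both falsy
  }
  return count;
}
```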
frontend/javascripts/oxalis/store.ts (1)
195-201
: Helpful documentation for version handling.
The added comments provide valuable context about how the version field behaves, particularly in conflict scenarios. This documentation clarifies that this field represents the "most recently saved" version from the front-end perspective and explains the distinction between stored versions and unsaved changes.
frontend/javascripts/libs/async/deferred.ts (1)
2-3
: Fixed TypeScript strict initialization with definite assignment assertions.
The addition of the definite assignment operator (`!`) properly satisfies TypeScript's strict initialization rules. Since these properties are guaranteed to be initialized in the Promise constructor callback, this change correctly tells the TypeScript compiler that these properties will be assigned before use, without changing runtime behavior.
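For context, the definite-assignment pattern being referenced generally looks like this minimal deferred-wrapper sketch (not the exact contents of `deferred.ts`):

```typescript
class Deferred<T> {
  promise: Promise<T>;
  // The `!` tells the compiler these are assigned before use: the Promise
  // executor below runs synchronously inside the constructor.
  resolve!: (value: T) => void;
  reject!: (reason?: unknown) => void;

  constructor() {
    this.promise = new Promise<T>((resolve, reject) => {
      this.resolve = resolve;
      this.reject = reject;
    });
  }
}

// Usage: resolve the promise from outside the constructor.
const deferred = new Deferred<number>();
deferred.promise.then((value) => console.log("resolved with", value));
deferred.resolve(42);
```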
frontend/javascripts/test/model/binary/cube.spec.ts (5)
101-101
: Method name updated for clarity.
The method renamed from `markAsPulled()` to `markAsRequested()` better reflects the actual state transition: the bucket is being marked as requested rather than pulled. This naming is more precise and aligns with the bucket lifecycle.

173-173
: Method name updated for consistency.
This change maintains consistency with the earlier renaming pattern, updating `markAsPulled()` to `markAsRequested()` throughout the test cases.

217-217
: Test name updated to match revised method name.
The test name has been updated to use `mayBeGarbageCollected()` instead of `shouldCollect()`, maintaining alignment with the renamed method being tested.

222-222
: Method names updated for more descriptive terminology.
Both changes follow the same pattern of making the method names more descriptive:
- `markAsPulled()` → `markAsRequested()`
- `shouldCollect()` → `mayBeGarbageCollected()`
These changes improve code readability by using terminology that more clearly communicates the intent of each method.
Also applies to: 226-226

243-243
: Consistent method naming.
This change completes the renaming pattern throughout the file, replacing `markAsPulled()` with `markAsRequested()` in the loop that prevents garbage collection for all buckets.

frontend/javascripts/test/model/binary/temporal_bucket_manager.spec.ts (4)
67-67
: Method name updated for clarity.
The method renamed from `markAsPulled()` to `markAsRequested()` better reflects the state being set: the bucket is being marked as requested rather than pulled. This matches the changes made to the `DataBucket` class.

75-76
: Method name updated with proper state verification.
Renaming `markAsPulled()` to `markAsRequested()` maintains consistency with the actual state model. The test also correctly verifies the state transition by checking `isLoaded()` after receiving data on line 76.

85-86
: Method name updated maintaining test sequence.
This change maintains the test sequence logic: marking the bucket as requested, then receiving data, and finally verifying that the manager count is zero after the bucket is loaded.

99-99
: Consistent method naming in helper function.
The `prepareBuckets` helper function has been updated to use `markAsRequested()` instead of `markAsPulled()`, ensuring consistency throughout the test file.

frontend/javascripts/oxalis/model/helpers/typed_buffer.ts (3)
4-13
: Well-defined type for typed array constructors.
The `TypedArrayConstructor` type provides a clear union of all possible typed array constructors, making type checking and IDE auto-completion more robust. This enhances code safety when working with different array types.
15-51
: Centralized mapping for element class to array constructor.
Good implementation of a mapping function that converts element class names to appropriate typed array constructors and channel counts. This logic was likely scattered or duplicated before, and centralizing it improves maintainability.
The switch statement is comprehensive, covering all supported element types with appropriate comments for special cases like `uint24`.
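As a rough sketch of what such a mapping typically looks like (the actual helper in `typed_buffer.ts` likely supports more element classes, e.g. 64-bit ones):

```typescript
type SketchTypedArrayConstructor =
  | Uint8ArrayConstructor
  | Uint16ArrayConstructor
  | Uint32ArrayConstructor
  | Float32ArrayConstructor;

// Maps an element class name to [constructor, channelCount].
function getConstructorForElementClassSketch(
  elementClass: string,
): [SketchTypedArrayConstructor, number] {
  switch (elementClass) {
    case "uint8":
      return [Uint8Array, 1];
    case "uint24":
      // uint24 is stored as three uint8 channels (e.g. RGB data).
      return [Uint8Array, 3];
    case "uint16":
      return [Uint16Array, 1];
    case "uint32":
      return [Uint32Array, 1];
    case "float":
      return [Float32Array, 1];
    default:
      throw new Error(`Unsupported element class: ${elementClass}`);
  }
}
```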
53-65
: Utility function simplifies typed buffer creation.
The `uint8ToTypedBuffer` function encapsulates the logic for converting between `Uint8Array` and other typed arrays. The function handles both the case where the buffer is provided (conversion) and where it's not (creation). This abstraction will reduce code duplication and make buffer handling more consistent across the codebase.
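A hedged sketch of the conversion idea for one concrete element class (the real `uint8ToTypedBuffer` is generic over the element class and may behave differently in detail):

```typescript
// Example for uint16 data: reinterpret a raw byte buffer as a typed array,
// or allocate a new zero-filled one when no buffer is given.
function uint8ToUint16Sketch(buffer: Uint8Array | null, elementCount: number): Uint16Array {
  if (buffer != null) {
    // View the same underlying bytes without copying them.
    const length = buffer.byteLength / Uint16Array.BYTES_PER_ELEMENT;
    return new Uint16Array(buffer.buffer, buffer.byteOffset, length);
  }
  return new Uint16Array(elementCount);
}

// Usage: reinterpret 8 raw bytes as four uint16 values.
const raw = new Uint8Array([1, 0, 2, 0, 3, 0, 4, 0]);
console.log(Array.from(uint8ToUint16Sketch(raw, 0))); // [1, 2, 3, 4] (little-endian)
```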
frontend/javascripts/oxalis/model/helpers/bucket_compression.ts (3)
3-4
: Updated imports for the new typed buffer utilities.
The imports have been updated to include:
- The `ElementClass` type for proper typing
- The new `uint8ToTypedBuffer` utility function
This reflects the function parameter changes and leverages the new centralized typed buffer handling.
10-11
: Improved function signature with direct elementClass parameter.
The function now takes an `elementClass` parameter instead of a `bucket` object, which is a better design as it:
- Only requires the specific data needed (elementClass) rather than the entire bucket
- Reduces coupling between the compression utilities and bucket implementation
- Makes the function more reusable in contexts without bucket objects
13-13
: Utilizes the new centralized typed buffer conversion.
The implementation now uses the dedicated `uint8ToTypedBuffer` utility instead of a method on the bucket. This change:
- Aligns with the new function signature
- Takes advantage of the centralized typed buffer handling
- Makes the code more maintainable as buffer conversion logic is now in one place
This is a good example of proper separation of concerns.
frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (6)
1-12
: All imports and type definitions look good.
No immediate issues regarding correctness, naming, or maintainability.
13-32
: Class field declarations are well-structured.
All fields have clear, descriptive names and consistent typing. The separation between local vs. backend data (uncompressed vs. compressed) is clear.
39-56
: Constructor logic is consistent.
Initializing and marking whether backend data needs merging is well done. Consider handling potential errors (e.g., null or invalid dataClone) upfront, although it’s likely not an issue if upstream calls always provide valid data.
81-89
: Local data retrieval is well-defined.
This method correctly differentiates between uncompressed data (`this.dataClone`) and compressed data.
91-93
: Backend data availability check looks straightforward.
This helper improves code readability without overhead.
108-148
: Race conditions appear well-managed, but verify concurrency.
`getDataForRestore()` merges local data with backend data only if both are ready. This is logical, but consider verifying how multiple calls behave if the promise or local data are modified concurrently or if data arrives late.

frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration.spec.ts (2)
469-469
: No concerns about the added blank line.
663-716
: New test covers undo/redo and garbage collection properly.
This test scenario is thorough, especially with bucket collection before undoing and redoing. Great addition to ensure correct changes persist across GC boundaries.
This test scenario is thorough, especially with bucket collection before undoing and redoing. Great addition to ensure correct changes persist across GC boundaries.frontend/javascripts/oxalis/model/sagas/update_actions.ts (2)
563-563
: Added comment clarifies usage.
If `agglomerateId` is truly unused in the backend, consider removing it altogether or referencing use cases where it might become relevant. Otherwise, the documentation helps avoid confusion.
597-598
: Unused parameters are now documented.
Likewise, if `agglomerateId1` and `agglomerateId2` remain unused, removing them may be preferable unless future expansions need them. The comment is helpful in clarifying current usage.

frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (2)
77-113
: PlanExecutor logic appears coherent.
The step-by-step design and error throwing for unexpected actions provide good coverage. No immediate issues found here.
267-299
: Thorough final state checks in repeated restore operations.
This approach tests both unmerged and merged data states across multiple restore calls, ensuring correctness in complex asynchronous scenarios.
frontend/javascripts/oxalis/model/sagas/undo_saga.ts (1)
187-200
: Clear concurrency safeguard for ongoing volume operations.
By verifying that all current bucket snapshots are empty before allowing undo/redo, you help prevent partial or conflicting state changes. This protective measure looks well-implemented.
frontend/javascripts/oxalis/model/bucket_data_handling/data_cube.ts (2)
321-371
: Refined garbage collection logic respects recently accessed buckets.
Switching to `mayBeGarbageCollected(true)` effectively prevents collecting buckets still in use, aligning with the intended design. No concerns found here.
373-419
: Validate potential partial fetch scenarios when aborting requests.
Aborting in-flight pull requests prevents outdated data returns. However, confirm that buckets left "failed" can be re-requested immediately without issues or unexpected states.
frontend/javascripts/oxalis/model/actions/volumetracing_actions.ts (3)
350-353
: Well-structured undo action
Consolidating parameters into a `bucketSnapshot` is a cleaner approach and simplifies the action payload. Looks good!
2-2
:✅ Verification successful
Consider verifying unused imports.
While this line introduces a type import, please ensure these types are being utilized. If not, consider removing to maintain a lean import structure. Would you like to scan the file and confirm this import is used?
🏁 Script executed:
```shell
#!/bin/bash
rg --context 4 'ContourMode|OrthoView|Vector2|Vector3' frontend/javascripts/oxalis/model/actions/volumetracing_actions.ts
```
Length of output: 2761
All imported types are in active use.
The scan confirms that each of the imported types (ContourMode, OrthoView, Vector2, and Vector3) are used throughout the file in various function parameters. There’s no evidence of any unused import, so no modifications are needed.
8-12
:✅ Verification successful
Ensure correct usage of combined imports.
The code now imports multiple entities from `types/api_flow_types`. Confirm all named imports (`BucketDataArray`, `ServerEditableMapping`, `ServerVolumeTracing`) are used or else remove the unused ones.
🏁 Script executed:
```shell
#!/bin/bash
rg --context 4 'ServerEditableMapping|ServerVolumeTracing|BucketDataArray' frontend/javascripts/oxalis/model/actions/volumetracing_actions.ts
```
Length of output: 1558
Combined Import Usage Verified
- The named imports from "types/api_flow_types" (BucketDataArray, ServerEditableMapping, and ServerVolumeTracing) are all confirmed to be in use:
  - BucketDataArray is utilized in the `MaybeUnmergedBucketLoadedPromise` type alias.
  - ServerVolumeTracing is passed as a parameter to the `initializeVolumeTracingAction` function.
  - ServerEditableMapping is required by the `initializeEditableMappingAction` function.
- Additionally, note that AdditionalCoordinate is imported separately from the same module. If it's used elsewhere in the file, its separate import is acceptable; however, if it's not needed or can be merged for consistency, consider updating the import statement.
frontend/javascripts/oxalis/model/bucket_data_handling/pullqueue.ts (9)
9-9
: Great use of typed import.
Importing `DataBucket` clarifies function signatures.

51-54
: Loop concurrency logic.
Confirm correct usage of the while condition. If `priorityQueue.length` is 0, the loop halts even though `fetchingBatchCount < BATCH_LIMIT`. That's intended, but confirm no edge cases exist where an empty queue triggers repeated checks unnecessarily.
63-64
: Excellent usage of `markAsRequested`.
This ensures a clear state transition from `UNREQUESTED` to `REQUESTED`.
78-80
: Controlling concurrent batch fetches.
Incrementing `fetchingBatchCount` inside `pullBatch` is consistent with the concurrency limit. Looks good.
95-95
: Confirm all references to `cube.getOrCreateBucket`.
Ensure that call sites expecting `bucket` as `DataBucket` handle the possibility of returning a `NullBucket` for invalid addresses.
104-104
: Centralize logic in `handleBucket`.
Invoking `handleBucket` fosters separation of concerns, making `pullBatch` more readable.
131-131
: Managing fetch counters.
Decrementing `fetchingBatchCount` ensures concurrency tracking remains accurate.
137-138
: Informative comments for scheduling a retry.
Adding a note about parallel batches helps future maintainers grasp the rationale for `isRetryScheduled`.
158-167
: Eager vs. lazy value-set computation.
Conditionally calling `receiveData` with `computeValueSet = true` is an elegant approach that can improve user-facing performance. Great job!

frontend/javascripts/test/model/binary/pullqueue.spec.ts (6)
3-3
: Helpful LZ4 mock.
Retaining the lz4 mock is vital for consistent test outcomes.
6-7
: Improving type safety with `anyTest` and `TestFn`.
Great step toward robust typed tests.
10-10
: Check error callback usage.
Verify that both resolve and reject paths in `promise.then(func, func)` are well-tested to confirm coverage.
26-26
: Consistent mock templates.
Defining `mockedCubeTemplate` clarifies default properties for your test cubes. Maintain it consistently across similar test files.
45-47
: Accurate store mocking for volume arrays.
This structure better reflects the real store shape. Good improvement.
66-69
: Clear test context definitions.
Specifying `buckets` and `pullQueue` on the test context is thoroughly typed, improving reliability.

frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (20)
16-16
: Remove or replace type imports that are unused.
Ensure both `BucketDataArray` and `ElementClass` are referenced in this file.

20-20
: Typed buffer helpers are beneficial.
`getConstructorForElementClass` and `uint8ToTypedBuffer` can reduce boilerplate in future expansions.

21-21
: Robust snapshot handling.
Importing both `BucketSnapshot` and `PendingOperation` clarifies the relationship between bucket data and snapshot logic.

49-49
: Typed `NullBucket`.
`readonly type = "null" as const;` is a clean, safe approach for differentiating from `DataBucket`.

70-70
: Composing multiple bucket types.
Combining `DataBucket` and `NullBucket` under `Bucket` is straightforward. Good type union usage.
80-82
: Immutable bucket identity.
`readonly type = "data"`, `readonly elementClass`, and `readonly zoomedAddress` promote mutable data with a stable top-level identity. Nicely done.

95-95
: Structured pending operations.
An explicitly typed `Array<PendingOperation>` is easier to reason about than a generic function array.

97-97
: Private usage of `accessed`.
It's now fully controlled by getters or public methods if needed. Encourages correct usage.

105-105
: Deferred backend merge tracking.
`maybeUnmergedBucketLoadedPromise` indicates an asynchronous potential for new data. Good naming choice.

178-180
: Garbage collection gating.
Conditionally skipping the `accessed` flag is a nice approach if some scenarios require ignoring usage state.
303-303
: Snapshot integration with Redux.
Dispatching `addBucketToUndoAction` here is seamless. Perfect for hooking into the undo logic.

306-309
: Granular snapshot creation.
`getSnapshotBeforeMutation` clarifies snapshot timing. Good naming for code readability.

310-315
: Asynchronous snapshot usage.
Waiting for `ensureLoaded()` ensures consistent data prior to snapshot, preventing partial snapshots.
317-353
: Comprehensive snapshot logic.
The method covers both in-flight data merges and eager data requests. Thorough approach!
355-373
: Restoring bucket from snapshot.
Merging new data, pending ops, and final states is well-handled. This is a key feature for undo/redo reliability.
389-389
: Atomic data setting.
The `setData` method updates the state, flags `dirty`, triggers the push queue, and refreshes the UI. Straightforward.

557-558
: State fallback logic.
Switching to `BucketStateEnum.MISSING` or repositioning to `UNREQUESTED` clarifies the boundary between a missing and a re-pullable bucket.

572-574
: Double data assignment warning.
This `_logMaybe` call helps detect unintentional `receiveData` calls. Good debugging safeguard.
755-760
: Bucket-address-based debug approach.
Conditionally triggering a debugger or log is a nice pattern for singled-out analysis in complex systems.
792-798
: Consistent method for highest-priority pull.
`addToPullQueueWithHighestPriority` centralizes re-queuing logic. Clear and maintainable.
Sure, let's have a call once you're back in office :)
Actionable comments posted: 1
🔭 Outside diff range comments (1)
frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (1)
318-353
: 🛠️ Refactor suggestion
Evaluate the `getSnapshot` synchronization approach.
`getSnapshot` can trigger a backend request if `data` is null. For very large data, this might block or chain the request unexpectedly. Consider an asynchronous or lazy approach (similar to compression) to avoid performance hits during snapshot creation.
🧹 Nitpick comments (7)
frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (3)
1-11
: Ensure `.catch()` or error handling on compression Promises.
There is no `.catch()` usage for the compression Promises returned by `compressTypedArray()` and `decompressToTypedArray()`. If an error occurs during compression or decompression (e.g., data corruption or memory issues), the code might silently fail.
```diff
+ // Example usage:
+ this.compressor.compressTypedArray(this.dataClone)
+   .then((compressedData) => {
+     this.compressedData = compressedData;
+     this.dataClone = null;
+   })
+   .catch((err) => {
+     console.error("Compression error:", err);
+     // Optionally forward err to a centralized error handler
+   });
```
37-53
: Consider deferring or batching compression
`startCompression()` is invoked in the constructor, causing compression to happen as soon as the instance is created. For large data sets or many consecutive instantiations, this might introduce a performance bottleneck. Consider a lazy or batched compression approach to prevent blocking the main thread.
78-86
: Refine error message when no local data is available
The error message "BucketSnapshot has neither data nor compressedData." might be ambiguous. Consider clarifying why data might be missing or what the caller can do to remedy the situation.
```diff
- throw new Error("BucketSnapshot has neither data nor compressedData.");
+ throw new Error(
+   "BucketSnapshot found no local or compressed data. Ensure the data is loaded or compression has completed."
+ );
```

frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (4)
95-105
: Ensure proper re-initialization of pending states
`pendingOperations: Array<PendingOperation>` and `maybeUnmergedBucketLoadedPromise` both manage in-flight or deferred logic. Consider resetting these properties when the bucket transitions from a dirty to a fully saved state to avoid stale operations.
178-185
: Clarify garbage-collection logic
The new `mayBeGarbageCollected(respectAccessedFlag: boolean)` method might benefit from more descriptive naming or docstrings, especially regarding the interplay between `accessed`, `dirty`, and `dirtyCount`. Improving clarity can reduce confusion for future maintainers.
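To make the interplay concrete, here is a small sketch of plausible semantics (an assumption for illustration only; the real `DataBucket` method may consult additional state):

```typescript
interface GcFlags {
  accessed: boolean;   // bucket was recently used for rendering
  dirty: boolean;      // bucket has unsaved local changes
  dirtyCount: number;  // pending save operations referencing this bucket
}

function mayBeGarbageCollectedSketch(flags: GcFlags, respectAccessedFlag: boolean): boolean {
  if (flags.dirty || flags.dirtyCount > 0) {
    return false; // never collect unsaved data
  }
  if (respectAccessedFlag && flags.accessed) {
    return false; // keep buckets that were just rendered
  }
  return true;
}
```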
390-390
: Preserve bucket immutability or minimize side effects
`setData(newData: BucketDataArray, newPendingOperations: Array<PendingOperation>)` directly replaces `this.data` and `this.pendingOperations`. If other parts of the code hold references to these arrays, ensure no concurrency or unintended side effects occur.
793-798
: Avoid overshadowing pull queue priorities
`addToPullQueueWithHighestPriority()` might overshadow existing priority-based logic. If multiple buckets are inserted at `PRIORITY_HIGHEST`, carefully confirm that it doesn't clog the pull queue or starve lower-priority buckets.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (18 hunks)
- frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (1 hunks)
- frontend/javascripts/types/api_flow_types.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/javascripts/types/api_flow_types.ts
🧰 Additional context used
🧬 Code Definitions (1)
frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (5)
- frontend/javascripts/types/api_flow_types.ts (2): ElementClass (33-44), BucketDataArray (48-57)
- frontend/javascripts/oxalis/constants.ts (1): BucketAddress (22-24)
- frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (2): PendingOperation (6-6), BucketSnapshot (13-146)
- frontend/javascripts/oxalis/model/actions/volumetracing_actions.ts (1): MaybeUnmergedBucketLoadedPromise (33-33)
- frontend/javascripts/oxalis/model/bucket_data_handling/pullqueue.ts (1): PullQueueConstants (15-20)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: circleci_build
🔇 Additional comments (6)
frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (6)
50-50
: Avoid leftover debug code
Based on previous review comments, there's a request to remove or refactor debug calls like `type = "null" as const`. Confirm if it's still relevant or if this line needs rewriting.
303-309
: Verify that undo logic triggers only once
`Store.dispatch(addBucketToUndoAction(this.getSnapshotBeforeMutation()))` is guarded by a check to ensure the snapshot is only created once per volume transaction. Double-check that no race condition can cause multiple snapshots for the same transaction.
356-374
: Check state consistency after restore
`restoreToSnapshot` sets new data and logs a message if the bucket is still in the `UNREQUESTED` state. Ensure that any subsequent merges or pushes respect this potential mismatch, to avoid overwriting newly restored data.
573-575
: Double-check partial loads
`receiveData` logs a warning if `this.data` exists. There's a potential scenario where partial data has been loaded or merged. Confirm that no repeated calls (or missing merges) occur if data is reloaded from the server.
749-760
: Remove leftover debug calls
Lines 749 (`_logMaybe`) and 757-760 (`_debuggerMaybe`) still contain debug logic for the specific address `[19, 30, 16, 0]`. Previous comments suggest removing or gating these logs. Make sure they're required for production or wrap them in a recognized debug flag.
16-20
:❓ Verification inconclusive
Streamline new imports
Imports from `api_flow_types` and `bucket_snapshot.ts` look coherent. Make sure these are actually needed across all usage contexts. Unused imports can sometimes appear due to refactors. Please check whether all newly added imports (`ElementClass`, `BucketDataArray`, `PendingOperation`, etc.) are indeed referenced.
🏁 Script executed:
```shell
#!/bin/bash
rg -A 2 "ElementClass"
rg -A 2 "BucketDataArray"
rg -A 2 "PendingOperation"
```
Length of output: 80264
Optimize Import Usage
Based on the verification:
• The global search confirms that `ElementClass` is widely used across the codebase, so its import here seems justified.
• In contrast, neither `BucketDataArray` nor `PendingOperation` produced additional usage references outside their import statements in this file.
Please double-check the complete implementation in `frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts` to ensure that these two imports are indeed needed (for example, as type annotations) before deciding to keep or remove them.
This was a very enjoyable read! Big improvement, thank you for investing the time and effort to get this done. I'll do some testing tomorrow and will report afterwards 🐒
No issues at all during my monkey testing 🎉
Co-authored-by: Daniel <[email protected]>
Actionable comments posted: 0
♻️ Duplicate comments (1)
frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (1)
254-254
:⚠️ Potential issue
Inconsistent usage of async calls.
The call to `planExecutor.executeStep("beforeInstantiation")` is not awaited, potentially causing race conditions or incomplete setup. Please consider adding `await` to ensure each step completes before proceeding.
```diff
- planExecutor.executeStep("beforeInstantiation");
+ await planExecutor.executeStep("beforeInstantiation");
```
🧹 Nitpick comments (1)
frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (1)
204-205
: Clarify comment about decompression.
As suggested in a previous review, the comment could be clearer.
```diff
- // For the second read, local and backend data need to be decompressed
- // again.
+ // For the second read, local and backend data need to be decompressed
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- frontend/javascripts/oxalis/model/bucket_data_handling/data_cube.ts (4 hunks)
- frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (1 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (1)
frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (2)
PendingOperation (6-6), BucketSnapshot (13-146)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: circleci_build
🔇 Additional comments (14)
frontend/javascripts/oxalis/model/bucket_data_handling/data_cube.ts (6)
23-23
: LGTM: Enhanced code organization.
The type-only import for `Bucket` improves separation between types and runtime code.

35-35
: LGTM: Consolidated type imports.
Reorganizing to import `BucketDataArray` along with other types from `api_flow_types` improves import organization.

327-333
: Improved garbage collection logic.
The replacement of `shouldCollect()` with `mayBeGarbageCollected(true)` enhances the garbage collection logic by explicitly respecting the accessed flag. This ensures buckets recently used for rendering aren't prematurely collected.

377-392
: LGTM: Enhanced code clarity with detailed comments.
The added comments thoroughly explain the context in which this method is called and the rationale for aborting requests rather than clearing the queue. This makes the code's intentions much clearer.

396-396
: Remove debug method.
This debug method should be removed as suggested in previous review comments.

397-410
: Improved bucket collection logic.
The enhanced predicate for bucket collection now properly checks whether a bucket can be garbage collected before attempting to collect it. This prevents the collection of buckets that are still in use or contain unsaved data.
frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (8)
1-10
: Well-structured module documentation.
The comprehensive documentation at the beginning of the file clearly explains the purpose and approach of this test module. This is excellent practice for maintainability.

26-75
: Good implementation of MockCompressor.
The MockCompressor class provides a robust way to control asynchronous compression and decompression operations during tests. The queue-based design with explicit resolution methods makes the tests deterministic and easier to understand.

107-108
: Consider removing sleepTick as previously discussed.
There are existing review comments from both philippotto and daniel-wer suggesting to remove this code. If it's no longer needed, please remove it.

124-140
: Well-designed test assertion helpers.
The `expectUnmerged` and `expectMerged` helper functions provide clear, reusable assertions that improve test readability and maintenance. The implementation of `expectMerged` correctly applies the pending operations to verify the merged state.

142-231
: Comprehensive test plan coverage.
The test plans cover a wide range of scenarios, including different timing of compression/decompression operations and backend data availability. This thorough approach helps ensure the BucketSnapshot class behaves correctly under various conditions.

256-264
: Appropriate test setup for BucketSnapshot.
The BucketSnapshot instantiation is well-structured with clear test data and mock dependencies. The use of a custom MockCompressor allows precise control over the asynchronous operations during testing.

281-295
: Good verification of data consistency.
The test properly verifies data consistency across multiple calls to `getDataForRestore()`, ensuring the BucketSnapshot correctly handles data merging and persistence.

297-299
: Thorough queue verification.
The verification that all compression and decompression operations have been properly handled (counters at 0) ensures no unexpected asynchronous operations are pending, which is crucial for reliable tests.
Co-authored-by: Daniel <[email protected]>
…snapshot.ts Co-authored-by: Daniel <[email protected]>
…ossos into refactor-volume-undo
LGTM 👍
@daniel-wer See my latest commit. I had to undo the change with
Actionable comments posted: 1
♻️ Duplicate comments (1)
frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (1)
148-166
: 🛠️ Refactor suggestion
Strengthen merge validation or document assumptions.
`mergeDataWithBackendDataInPlace` only checks array lengths. Consider verifying additional invariants (e.g., matching channel count) to avoid subtle data corruption.
```diff
 if (originalData.length !== backendData.length) {
   throw new Error("Cannot merge data arrays with differing lengths");
 }
+// Optionally, verify any additional shape constraints here:
+// e.g., channel checks, dimension metadata checks
```
🧹 Nitpick comments (6)
frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (1)
37-53
: Consider adding explicit error handling for compression failures.
Overall, your constructor logic is clean. However, if `compressTypedArray` fails or rejects, it might leave the snapshot in an undefined state. Consider adding a catch or a fallback strategy for compression errors.
```diff
 constructor( ... ) {
   this.dataClone = dataClone;
   ...
   try {
     this.startCompression();
   } catch (error) {
+    // Log or handle compression error
+    console.error("Compression failed", error);
   }
 }
```

frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (5)
104-104
: Clarify promise usage.
While this `maybeUnmergedBucketLoadedPromise` field is valid for deferring initialization, consider adding doc comments to explain how it's resolved and reused.

177-183
: Naming consideration for `mayBeGarbageCollected`.
The logic is correct. However, consider renaming to `canBeGarbageCollected` for clearer semantics.

274-274
: Deprecated method usage.
The `label_DEPRECATED` method is clearly discouraged. Consider removing it entirely if no longer needed, or schedule it for removal.

738-744
: `_logMaybe` debugging approach.
Currently hard-coded to log only if the address is `[19, 30, 16, 0]`. Consider parameterizing or removing in production builds to avoid "magic" addresses.

745-749
: `_debuggerMaybe` for debugging.
No issues, though you might gate this behind an environment check for production.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- CHANGELOG.unreleased.md (1 hunks)
- frontend/javascripts/oxalis/api/api_latest.ts (2 hunks)
- frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (17 hunks)
- frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (1 hunks)
- frontend/javascripts/oxalis/model/bucket_data_handling/data_cube.ts (4 hunks)
- frontend/javascripts/oxalis/model/sagas/undo_saga.ts (13 hunks)
- frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx (1 hunks)
- frontend/javascripts/oxalis/store.ts (1 hunks)
- frontend/javascripts/test/model/binary/cube.spec.ts (4 hunks)
- frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts (1 hunks)
- frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (1 hunks)
- frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration.spec.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- frontend/javascripts/oxalis/model/sagas/volumetracing_saga.tsx
- frontend/javascripts/test/model/binary/layers/wkstore_adapter.spec.ts
- frontend/javascripts/oxalis/store.ts
- frontend/javascripts/oxalis/api/api_latest.ts
- frontend/javascripts/test/model/binary/cube.spec.ts
🧰 Additional context used
🧬 Code Definitions (2)
frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (1)
frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (2)
PendingOperation (6-6), BucketSnapshot (13-146)
frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration.spec.ts (3)
- frontend/javascripts/test/helpers/apiHelpers.ts (1): createBucketResponseFunction (38-50)
- frontend/javascripts/oxalis/constants.ts (1): Vector3 (13-13)
- frontend/javascripts/oxalis/model/actions/volumetracing_actions.ts (4): setActiveCellAction (194-206), startEditingAction (161-166), addToLayerAction (168-172), finishEditingAction (189-192)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: circleci_build
🔇 Additional comments (40)
frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts (6)
1-12
: Imports and type definitions look good.
Dependencies and type declarations are well-organized. No issues to report here.
13-21
: Class field definitions are straightforward.
The `needsMergeWithBackendData` flag and the typed properties provide clear intent. This structure is easy to follow and promotes clarity.
78-86
: Local data retrieval logic is sound.
Graceful fallback throws an error if both `dataClone` and `compressedData` are null. This ensures integrity of retrieved data.
92-103
: Backend data retrieval logic is appropriate.
Similar checks for both uncompressed and compressed data keep the code safe from undefined references.
55-76
:❓ Verification inconclusive
Verify concurrency when accessing data during compression.
When `startCompression` runs asynchronously, calling `getLocalData()` in parallel might cause partial or inconsistent states. Ensure your calling code handles such scenarios (e.g., queueing or waiting). Could you confirm if any concurrency check or synchronization is done elsewhere by searching for all invocations of `getDataForRestore()` or `getLocalData()`?
🏁 Script executed:
```shell
#!/bin/bash
# Checking concurrency usage of getLocalData or getDataForRestore
rg -A 5 "getLocalData|getDataForRestore"
```
Length of output: 5085
Concurrency Alert: Verify Synchronized Access During Compression
The test suite shows that calls to `getDataForRestore()` (and consequently `getLocalData()`) are awaited, suggesting that restore operations are serialized during typical flows. However, the asynchronous behavior in `startCompression` still poses a risk: if `getLocalData()` is invoked concurrently while compression is in progress, it might return partial or inconsistent data.
- File: frontend/javascripts/oxalis/model/bucket_data_handling/bucket_snapshot.ts
- Review the implementations of `startCompression()`, `getLocalData()`, and `getDataForRestore()`.
- Test Coverage: Although tests in frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts await these calls, there is no explicit synchronization (e.g., locking or queuing) in the production code.
- Action: Please verify that any concurrent access is either explicitly guarded or that the usage contract enforces serialized access. Consider adding synchronization if concurrent calls might occur outside of the tested scenarios.
105-145
:❓ Verification inconclusive
Confirm deeper structure compatibility before merging.
`getDataForRestore` merges arrays only if lengths match, but complex data shapes might still differ. Confirm that no deeper mismatch exists in your broader codebase. If you suspect deeper shape differences can occur, please run a script to find references to `BucketDataArray` usage and check assumptions:
🏁 Script executed:
```shell
#!/bin/bash
# Searching for usage of BucketDataArray to confirm shape consistency
rg -A 3 "BucketDataArray"
```
Length of output: 14868
The merge ingetDataForRestore
currently hinges on a length check before invoking the in-place merge. Our search confirms that theBucketDataArray
is consistently defined as a set of typed arrays (e.g.,Uint8Array
,Int8Array
, etc.) across the codebase. However, while length equality is verified, there’s no explicit check that the local and backend arrays share the same underlying constructor. In scenarios where, for example, aUint8Array
might inadvertently be merged with anInt8Array
(even if they are of the same length), this could lead to subtle bugs.
- Action: Consider adding an explicit runtime check to ensure that both arrays not only match in length but are also of the same type (i.e., have the same constructor). Alternatively, document the invariant clearly so that it’s maintained throughout future changes.
frontend/javascripts/test/sagas/volumetracing/volumetracing_saga_integration.spec.ts (2)
469-469
: No meaningful changes in this line.
663-715
: Test logic for brushing with undo and garbage collection is solid.This test thoroughly exercises brush usage, saving state, reloading buckets, undoing, and redoing. It helps confirm that garbage collection does not break the undo/redo chain.
frontend/javascripts/test/model/volumetracing/bucket_snapshot.spec.ts (6)
1-9
: Well-structured introduction with clear explanation of test goals.The introduction effectively explains the purpose of the test module and how it addresses the async nature of
BucketSnapshot
operations. The comment clearly articulates the approach of using a plan executor to test various async event orders.
25-74
: Good implementation of MockCompressor with controlled async behavior.The
MockCompressor
class provides excellent test control over async compression/decompression operations through queue management. This approach allows tests to deterministically verify howBucketSnapshot
behaves under different ordering scenarios.
88-94
: The executeStep method properly awaits operations sequentially.The implementation correctly awaits each action before proceeding to the next one, ensuring that operations are executed in the order specified by the test plan.
121-137
: Effective helper functions for assertions.The
expectUnmerged
andexpectMerged
functions provide clean abstractions for verifying expected outcomes, enhancing test readability.
139-228
: Comprehensive test plans covering various async scenarios.The plans effectively test different event ordering scenarios, such as:
- Slow compression with unavailable backend data
- Backend data available before compression
- Compression before backend data is available
- Various combinations of decompression timing
This thorough coverage will help ensure the robustness of the
BucketSnapshot
implementation.
230-297
: Well-structured test implementation.The test properly:
- Sets up test data and mocks
- Executes steps according to the plan
- Performs assertions at appropriate points
- Verifies that all queues are empty at the end
This structure ensures reliable testing of the
BucketSnapshot
behavior.frontend/javascripts/oxalis/model/bucket_data_handling/data_cube.ts (3)
327-333
: Improved garbage collection check with clear reasoning.The change from a direct check to
mayBeGarbageCollected(true)
adds clarity with the explicit comment about respecting the accessed flag. This prevents accidentally collecting buckets that were just used for rendering.
379-392
: Comprehensive documentation for collectBucketsIf behavior.The detailed comment explains the context in which this method is called and the guarantees that should be maintained by callers. This is valuable information for maintainers and helps prevent misuse of the function.
396-409
: Enhanced bucket collection safety with additional checks.Adding the
mayBeGarbageCollected()
check with a clear explanation improves the safety of the collection process. The comment also clarifies whyrespectAccessedFlag
is set tofalse
in this context, which is important for understanding the behavior.frontend/javascripts/oxalis/model/sagas/undo_saga.ts (7)
4-5
: Updated import structure for cleaner dependencies.The reorganized imports and addition of the BucketSnapshot import establish a clear dependency structure that aligns with the refactored implementation.
Also applies to: 44-45
56-61
: Simplified VolumeUndoState structure.The new structure directly incorporates BucketSnapshot objects, removing the complexity of the previous implementation that likely handled compression separately. This simplification aligns with the PR's goal of reducing code complexity.
154-154
: Consistent terminology update from buckets to snapshots.Variable names have been updated to use "snapshot" terminology consistently throughout the file, improving code readability and accurately reflecting the new design.
Also applies to: 170-170, 327-327
187-200
: Updated check for empty bucket snapshots.The renamed function
areCurrentBucketSnapshotsEmpty
maintains the same logical check but with updated terminology that matches the new implementation. This ensures consistency across the codebase.
322-327
: Simplified bucket handling in addBucketToUndoAction.The code now directly adds the provided bucketSnapshot to the collection, eliminating what was likely more complex code for managing bucket data and compression. This simplification should make the code more maintainable.
628-658
: Streamlined implementation of volume state restoration.The new implementation:
- Works directly with
BucketSnapshot
objects- Properly collects current state before applying changes
- Handles restoration in a clear, sequential manner
This approach should be more maintainable and less error-prone than the previous implementation that likely had more complex handling of compressed data.
659-685
: Proper segment and group handling during undo/redo.
The segment and group update logic maintains the correct ordering (groups first, then segments) with explicit comments explaining why this order matters. The function returns a complete undo state that can be used to revert the operation, maintaining bidirectional undo/redo capability.
frontend/javascripts/oxalis/model/bucket_data_handling/bucket.ts (16)
16-16
: No issues with the new import.
19-19
: Clean addition of new helpers.
20-20
: Importing `BucketSnapshot` and `PendingOperation`.
This integration looks consistent with the new snapshot-based design.
49-49
: Use of `readonly` for the `NullBucket` type.
Ensures strict type-safety and immutability. Good approach.
70-70
: Unifying the bucket type aliases under `Bucket`.
No concerns; this is a straightforward union type.
80-82
: Enforcing immutability.
Transitioning `type`, `elementClass`, and `zoomedAddress` to `readonly` is beneficial for clearer intent and preventing accidental mutation.
95-95
: Refining `pendingOperations`.
Changing to an array of `PendingOperation` makes the system more explicit and flexible for various operations.
97-97
: Introducing `private accessed`.
Helps handle the garbage collection logic more reliably. No issues.
301-303
: Adding to undo stack.
Using a snapshot before mutation is a clean approach to preserve pre-mutation states.
304-309
: `getSnapshotBeforeMutation` method.
Synchronous snapshot creation avoids blocking user input. Implementation aligns with the new undo system.
310-318
: `getSnapshotBeforeRestore` method.
Ensures data is loaded before snapshot creation. This asynchronous approach is logical for correct state retrieval.
320-335
: Centralized snapshot creation in `getSnapshot`.
The checks for `PREPARE_RESTORE_TO_SNAPSHOT` are robust, preventing unintentional calls when data is null.
348-354
: Constructing the new `BucketSnapshot`.
All relevant data and pending operations are passed in, cleanly decoupling snapshot logic from `DataBucket`.
358-369
: `restoreToSnapshot` flow.
Correctly merges restored data and updates pending operations. Implementation aligns well with the new snapshot model.
384-387
: `setData()` method.
Replacing local data and `pendingOperations` simultaneously ensures a clean transition. Concurrency looks manageable here.
450-450
: Triggering labeled event.
No concerns. The throttled approach is effective for performance.
This PR adds a new `BucketSnapshot` class which is responsible for compressing/decompressing local/remote bucket data. Using that abstraction, the undo saga could be slimmed down quite a bit (in code size as well as async complexities). Maybe this also fixed some bugs (I found the old logic very hard to understand). I also added some unit tests for the new abstraction.