
fix(sdk): bound browser decrypt memory for large chunker downloads #922

Merged
bnrobinson93 merged 7 commits into main from feat/919-chunker-memory-management on Apr 23, 2026

Conversation

Contributor

@bnrobinson93 bnrobinson93 commented Apr 22, 2026

Summary

Fix large browser decrypts from chunker sources failing near EOF due to memory pressure in @opentdf/sdk.

Main bug upstream: consumed decrypted segments stayed referenced in the decrypt queue after enqueue, so long-running browser decrypts retained plaintext history and eventually killed the tab. This change releases consumed plaintext promptly and also removes a few avoidable decrypt-path copies that were amplifying heap pressure.

Validated with a browser-local harness using local @opentdf/sdk:

  • encrypt 8 GiB file: succeeds
  • decrypt 8 GiB file: previously failed at ~7.979 GiB
  • decrypt 8 GiB file after fix: completes successfully

Root Cause

The browser decrypt path for chunker sources was not truly bounded in resident memory.

Most important issue:

  • resolved decryptedChunk promises remained stored in chunks[] after downstream consumption
  • each consumed plaintext segment therefore stayed strongly referenced
  • memory usage grew across the entire download until the browser tab failed

Additional pressure came from extra buffer copies in the AES-GCM/browser decrypt path.

What Changed

Core fix

  • release consumed plaintext after controller.enqueue(...) by replacing the consumed chunk slot with a fresh mailbox while preserving metadata
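The mailbox-swap idea can be sketched roughly like this (the `Mailbox`, `makeMailbox`, and `consumeChunk` names are illustrative, not the SDK's actual internals):

```typescript
// Illustrative model of the decrypt-queue fix: once a chunk's plaintext has
// been enqueued downstream, its slot is replaced with a fresh, empty mailbox
// so the resolved promise (and the plaintext it strongly references) becomes
// garbage-collectible, while per-chunk metadata survives the swap.
type Mailbox<T> = { promise: Promise<T>; resolve: (value: T) => void };

function makeMailbox<T>(): Mailbox<T> {
  let resolve!: (value: T) => void;
  const promise = new Promise<T>((r) => { resolve = r; });
  return { promise, resolve };
}

type Chunk = {
  plainSegmentSize: number;            // metadata preserved across the swap
  decryptedChunk: Mailbox<Uint8Array>; // holds the plaintext once decrypted
};

async function consumeChunk(
  chunks: Chunk[],
  index: number,
  enqueue: (plaintext: Uint8Array) => void
): Promise<void> {
  const plaintext = await chunks[index].decryptedChunk.promise;
  enqueue(plaintext);
  // Core fix: drop the strong reference to the consumed plaintext.
  chunks[index].decryptedChunk = makeMailbox<Uint8Array>();
}
```

Without the final reassignment, every resolved promise in `chunks[]` keeps its plaintext alive for the life of the stream, which is the retention pattern described above.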

Memory / copy reductions

  • avoid payload.asByteArray() on enqueue path
  • reduce redundant Uint8Array / BufferSource copying in browser crypto
  • avoid unnecessary AES-GCM payload/tag churn in browser-native decrypt path
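The kind of copy avoidance these bullets describe can be illustrated with typed-array views (layout and sizes here are illustrative, following the IV-then-ciphertext+tag GCM framing the PR works with):

```typescript
// Copy-avoidance sketch: a GCM payload laid out as [12-byte IV | ciphertext+tag]
// can be split with subarray(), which returns views over the same backing
// ArrayBuffer, instead of slice(), which allocates and copies each time.
const payload = new Uint8Array(12 + 48); // sizes illustrative

const iv = payload.subarray(0, 12);  // view: shares payload's buffer
const body = payload.subarray(12);   // view: shares payload's buffer
const copied = payload.slice(12);    // copy: fresh allocation, extra heap pressure

console.log(iv.buffer === payload.buffer);     // true: no copy made
console.log(copied.buffer === payload.buffer); // false: slice allocated
```

On an 8 GiB decrypt, each avoided per-segment copy removes a full segment-sized allocation from the hot path, which is why these reductions matter alongside the retention fix.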

Validation

Reproduced in a standalone browser harness, outside the downstream app/authz/transport path.

Observed before:

  • browser-local 8 GiB decrypt failed near end
  • errors included RangeError: Array buffer allocation failed

Observed after:

  • decrypt completes to EOF
  • harness reports:
    • [done] decrypt-local complete plaintext=8.000 GiB

Risk / Notes

  • Main functional behavior should be unchanged aside from memory retention/copy behavior during decrypt.
  • Debug logging is still present in this branch; may want follow-up cleanup or downgrade before merge.
  • Downstream app changes are not required for correctness if upstream fix lands, though callers may still choose explicit segmentBatchSize / maxConcurrentSegmentBatches tuning for browser perf policy.
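The tuning knobs mentioned above might look like this on the caller side (the option names come from the PR notes; the object they belong to and the default values are assumptions, not confirmed API):

```typescript
// Hypothetical browser-side decrypt tuning. With the upstream fix in place,
// peak resident plaintext should scale with segment size, batch size, and
// concurrency -- not with total file size.
const decryptTuning = {
  segmentBatchSize: 8,            // segments decrypted per batch (illustrative)
  maxConcurrentSegmentBatches: 2, // batches in flight at once (illustrative)
};

// Rough upper bound on resident plaintext for 1 MiB segments:
const segmentBytes = 1024 * 1024;
const peakPlaintextBytes =
  segmentBytes * decryptTuning.segmentBatchSize * decryptTuning.maxConcurrentSegmentBatches;
```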

How To Test

  1. Build local lib
  2. Install/relink local @opentdf/sdk into browser harness or downstream app
  3. Decrypt a large local encrypted file from a chunker source
  4. Confirm decrypt reaches EOF without tab crash / allocation failure

Summary by CodeRabbit

  • Bug Fixes

    • Improved large file handling with enhanced memory management for better stability during encryption and decryption operations
    • Extended decryption compatibility to accept multiple buffer input formats
  • Tests

    • Added comprehensive unit tests for AES-256-GCM cryptographic operations with various input types
  • Chores

    • Optimized test infrastructure and updated browser configuration for large file testing support

@coderabbitai

coderabbitai Bot commented Apr 22, 2026

Warning

Rate limit exceeded

@bnrobinson93 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 50 minutes and 9 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 50 minutes and 9 seconds.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f4ffa50c-4e43-4787-9be5-96f82d91b7f2

📥 Commits

Reviewing files that changed from the base of the PR and between 1d03e62 and bfb9692.

📒 Files selected for processing (2)
  • web-app/tests/tests/acts.ts
  • web-app/tests/tests/huge.spec.ts
📝 Walkthrough

This PR refactors the AES-GCM decryption pipeline to accept both ArrayBuffer and Uint8Array inputs, introduces a new decryptBufferSource export, and updates related type signatures across the crypto and cipher layers. It also adds GCM unit tests and refactors test infrastructure for large file handling.
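The dual-input acceptance described above implies a small normalization step at each decrypt entry point; a sketch (the helper name `toUint8` is illustrative, and the exact signature of the PR's `decryptBufferSource` export is not shown in this thread):

```typescript
// Accept ArrayBuffer | Uint8Array and normalize to a Uint8Array view.
// Wrapping an ArrayBuffer with new Uint8Array(buf) creates a view over that
// buffer, not a copy, so the normalization itself adds no heap pressure.
function toUint8(input: ArrayBuffer | Uint8Array): Uint8Array {
  return input instanceof Uint8Array ? input : new Uint8Array(input);
}
```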

Changes

Cohort / File(s) — Summary

Crypto Decryption Pipeline
lib/tdf3/src/ciphers/aes-gcm-cipher.ts, lib/tdf3/src/crypto/core/symmetric.ts, lib/tdf3/src/ciphers/symmetric-cipher-base.ts
Updated AesGcmCipher.decrypt and abstract SymmetricCipher.decrypt to accept `ArrayBuffer | Uint8Array`.

Crypto Model Updates
lib/tdf3/src/models/encryption-information.ts
Updated SplitKey.decrypt signature to accept `ArrayBuffer | Uint8Array`.

Chunk/Stream Handling
lib/tdf3/src/tdf.ts
Refactored slice indexing to use cached variables, updated decryptStreamFrom to enqueue plaintext via a single Uint8Array, and added logic to clear decrypted chunks from mailboxes after consumption while preserving metadata.

Zip Reader
lib/tdf3/src/utils/zip-reader.ts
Removed an unnecessary await in the getPayloadSegment return statement.

Cryptography Unit Tests
lib/tests/mocha/unit/crypto/crypto-service.spec.ts
Added two new AES-256-GCM encrypt→decrypt round-trip tests exercising both Uint8Array and raw ArrayBuffer input shapes.

Web App Test Infrastructure
web-app/tests/tests/huge.spec.ts, web-app/tests/playwright.config.ts, web-app/tests/tests/acts.ts
Reorganized the Large File test under a describe block with a chromium-only conditional skip, refactored cleanup logic with nested finally blocks, exported a shared appUrl constant, and added a conditional memory heap size override for the huge test set.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant AesGcmCipher
    participant BrowserNativeCryptoService
    participant DefaultCryptoService
    participant SubtleCrypto

    Caller->>AesGcmCipher: decrypt(ArrayBuffer | Uint8Array, key, iv)
    AesGcmCipher->>AesGcmCipher: normalize input to Uint8Array
    AesGcmCipher->>AesGcmCipher: processGcmPayload (IV + ciphertext+tag)
    
    alt cryptoService.name == BrowserNativeCryptoService
        AesGcmCipher->>BrowserNativeCryptoService: decryptBufferSource(subarray(12), key, subarray(0,12))
        BrowserNativeCryptoService->>SubtleCrypto: decrypt(params, key, ciphertext+tag)
        SubtleCrypto-->>BrowserNativeCryptoService: plaintext
        BrowserNativeCryptoService-->>AesGcmCipher: DecryptResult
    else other CryptoService
        AesGcmCipher->>DefaultCryptoService: decrypt(payload, key, payloadIv, AES_256_GCM)
        DefaultCryptoService->>SubtleCrypto: decrypt(params, key, ciphertext+tag)
        SubtleCrypto-->>DefaultCryptoService: plaintext
        DefaultCryptoService-->>AesGcmCipher: DecryptResult
    end
    
    AesGcmCipher-->>Caller: DecryptResult

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • eugenioenko

🐰 A cipher's metamorphosis complete,
BufferSource and Uint8Array now meet,
Auth tags stay bundled in GCM's embrace,
While chunks are cleared at their finish's place!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 27.27%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Description Check — ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title 'fix(sdk): bound browser decrypt memory for large chunker downloads' clearly describes the main change, aligning with the PR's core objective of resolving a memory-retention bug in the chunker decrypt path.
  • Linked Issues Check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check — ✅ Passed: check skipped because no linked issues were found for this pull request.


@github-actions

This comment was marked as resolved.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request optimizes AES-GCM decryption by supporting Uint8Array and BufferSource directly, reducing buffer copies, and implements memory management for streaming decryption by clearing consumed segments. It also introduces comprehensive debug logging for the decryption process. Feedback identifies a redundant IV assignment in the symmetric decryption logic and recommends removing the diagnostic logging and associated array mappings to prevent performance overhead in production.

Comment thread lib/tdf3/src/crypto/core/symmetric.ts Outdated
Comment thread lib/tdf3/src/tdf.ts Outdated
Comment thread lib/tdf3/src/tdf.ts Outdated
@bnrobinson93 bnrobinson93 force-pushed the feat/919-chunker-memory-management branch from 016f46f to 92696f7 Compare April 22, 2026 19:24
@github-actions

This comment was marked as resolved.

@bnrobinson93 bnrobinson93 force-pushed the feat/919-chunker-memory-management branch from 8f7174c to 1d6e356 Compare April 22, 2026 19:35
@github-actions

X-Test Failure Report

✅ go-v0.9.0
opentdf-ctl
opentdf-sdk-lib


@bnrobinson93 bnrobinson93 marked this pull request as ready for review April 23, 2026 17:32
@bnrobinson93 bnrobinson93 requested a review from a team as a code owner April 23, 2026 17:32

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
lib/tdf3/src/tdf.ts (1)

1271-1301: ⚠️ Potential issue | 🟠 Major

Use subtraction instead of modulo for slice-relative offsets.

Line 1276 computes the in-buffer offset with % slice[0].encryptedOffset, which can point to the wrong bytes when encrypted segment sizes vary. The fetched buffer starts at the first chunk’s encrypted offset, so each segment offset should be encryptedOffset - firstChunk.encryptedOffset.

🐛 Proposed fix
 export async function sliceAndDecrypt({
   buffer,
   reconstructedKey,
   slice,
@@
   specVersion: string;
 }) {
-  for (const index in slice) {
-    const chunk = slice[index];
+  const firstChunk = slice[0];
+  if (!firstChunk) {
+    return;
+  }
+
+  for (const chunk of slice) {
     const { encryptedOffset, encryptedSegmentSize, plainSegmentSize } = chunk;
 
-    const offset =
-      slice[0].encryptedOffset === 0 ? encryptedOffset : encryptedOffset % slice[0].encryptedOffset;
+    const offset = encryptedOffset - firstChunk.encryptedOffset;
     const encryptedChunk = buffer.subarray(offset, offset + (encryptedSegmentSize as number));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/tdf3/src/tdf.ts` around lines 1271 - 1301, The offset calculation for
reading each chunk from the buffer is wrong: replace the modulo-based
computation with a slice-relative subtraction so each chunk's offset is computed
as encryptedOffset - slice[0].encryptedOffset before slicing the buffer; update
the code around the loop that computes offset (referencing slice,
encryptedOffset, slice[0].encryptedOffset and where encryptedChunk is assigned)
to use subtraction instead of `%`, ensuring the encryptedChunk subarray
start/end use that new offset when passed to decryptChunk and subsequent size
checks/rejects.
🧹 Nitpick comments (1)
lib/tests/mocha/unit/crypto/crypto-service.spec.ts (1)

248-263: Add coverage for the browser-native decrypt branch.

This test uses DefaultCryptoService, so it primarily covers the processGcmPayload fallback path. Please add a case for the BrowserNativeCryptoService path in AesGcmCipher.decrypt, ideally with a non-zero byteOffset Uint8Array view to lock in the subarray handling this PR depends on.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/tests/mocha/unit/crypto/crypto-service.spec.ts` around lines 248 - 263,
Add a new unit test that exercises the BrowserNativeCryptoService branch in
AesGcmCipher.decrypt by instantiating AesGcmCipher with
BrowserNativeCryptoService (not DefaultCryptoService), encrypting a payload,
then decrypting using a Uint8Array view that has a non-zero byteOffset (e.g., a
subarray or a larger buffer slice) to ensure the BrowserNativeCryptoService path
and subarray handling are tested; reuse importSymmetricKey, Binary and the
existing encrypt call to produce ciphertext and assert
decrypted.payload.asString() equals the original string.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@lib/tdf3/src/ciphers/aes-gcm-cipher.ts`:
- Around line 64-82: The decrypt method of AesGcmCipher currently types its
first parameter as Uint8Array which breaks when callers pass an ArrayBuffer;
change the decrypt signature to accept ArrayBuffer | Uint8Array and normalize
the input to a Uint8Array at the top of AesGcmCipher.decrypt (e.g. replace
usages of buffer with a local const like input = new Uint8Array(buffer) or
similar) before calling buffer.subarray(), decryptBufferSource,
processGcmPayload, or this.cryptoService.decrypt so all downstream calls work
regardless of the original binary type.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 5df44342-46c1-436a-ab29-e0f0aede5e39

📥 Commits

Reviewing files that changed from the base of the PR and between 7c02e1f and b424f3c.

📒 Files selected for processing (5)
  • lib/tdf3/src/ciphers/aes-gcm-cipher.ts
  • lib/tdf3/src/crypto/core/symmetric.ts
  • lib/tdf3/src/tdf.ts
  • lib/tdf3/src/utils/zip-reader.ts
  • lib/tests/mocha/unit/crypto/crypto-service.spec.ts

Comment thread lib/tdf3/src/ciphers/aes-gcm-cipher.ts
Contributor

@eugenioenko eugenioenko left a comment


Not a blocker, but a suggestion.

The existing huge.spec.ts test always passes because the CI runner has enough RAM to hold 3 GB of leaked plaintext without the browser crashing. It tests correctness (output size) but not the memory invariant.

A simple way to make it catch this class of bug: cap the browser's heap size via Playwright launch args. If you limit V8's old-space to 512 MB, then 3 GB of retained plaintext will OOM but a properly streaming decrypt that releases segments will fit fine.

  test.use({
    launchOptions: {
      args: ['--js-flags=--max-old-space-size=512'],
    },
  });

This directly enforces the invariant that peak heap scales with segment size and concurrency, not total file size. Would have caught the original bug without needing a bigger file or longer CI time.

I've created a PR against main with that update; without the fix, we expect it to fail: #925


@github-actions

X-Test Failure Report

opentdf-ctl
opentdf-sdk-lib

@github-actions

X-Test Failure Report

✅ go-v0.9.0
opentdf-ctl
opentdf-sdk-lib


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/workflows/reusable_build-and-test.yaml (1)

285-293: ⚠️ Potential issue | 🟡 Minor

Verify ci aggregation semantics with the added platform-huge dependency.

The success gate contains(needs.*.result, 'success') (line 300) passes if any need succeeded, and the failure gate contains(needs.*.result, 'failure') (line 305) only trips on failure — not cancelled or skipped. With platform-huge added, if it is ever skipped (e.g., via a future if: gate) or cancelled due to timeout, ci may still report success as long as at least one other job passed. This isn't new to this PR, but adding a long-running, potentially-cancelled job makes the existing logic more fragile. Consider tightening to something like !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') and requiring all needed jobs to have succeeded.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/reusable_build-and-test.yaml around lines 285 - 293, The
ci job's current if-guards use contains(needs.*.result, 'success') and only
check for 'failure', which lets ci succeed even if some needs were cancelled or
skipped (especially now with platform-huge). Update the ci job's if conditions
(the expressions referencing contains(needs.*.result, ...)) to require all
needed jobs be successful by ensuring there are no non-success results; for
example replace the success gate with a predicate that asserts there are no
'failure', 'cancelled' or 'skipped' statuses (e.g.
!contains(needs.*.result,'failure') && !contains(needs.*.result,'cancelled') &&
!contains(needs.*.result,'skipped')) or otherwise implement an “all succeeded”
check so ci only runs when every need is success.
🧹 Nitpick comments (1)
.github/workflows/reusable_build-and-test.yaml (1)

223-273: Consider deduplicating platform-huge with platform-roundtrip via a matrix.

platform-huge is an almost verbatim copy of platform-roundtrip (lines 171-221), differing only in timeout-minutes (45 vs 90) and PLAYWRIGHT_TESTS_TO_RUN (roundtrip vs huge). Any future change to the setup (checkout refs, Go/Node versions, compose steps) will have to be made in two places and will drift. Collapsing the two into a single job with a strategy.matrix over { name, tests, timeout } would remove ~50 lines and eliminate the drift risk.

♻️ Sketch of a matrix-based consolidation
  platform-roundtrip:
    needs: [cli, lib, web-app]
    runs-on: ubuntu-22.04
    strategy:
      fail-fast: false
      matrix:
        include:
          - name: roundtrip
            tests: roundtrip
            timeout: 45
          - name: huge
            tests: huge
            timeout: 90
    name: platform-${{ matrix.name }}
    timeout-minutes: ${{ matrix.timeout }}
    defaults:
      run:
        working-directory: .github/workflows/roundtrip
    steps:
      # ... existing steps unchanged ...
      - env:
          PLAYWRIGHT_TESTS_TO_RUN: ${{ matrix.tests }}
        run: ./wait-and-test.sh platform

Then in the ci job needs: list, a single platform-roundtrip entry covers both matrix legs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/reusable_build-and-test.yaml around lines 223 - 273, The
two nearly identical GitHub Actions jobs platform-huge and platform-roundtrip
should be consolidated into one job using a strategy.matrix to avoid
duplication; edit the workflow to replace the separate platform-huge job with a
single job (e.g., platform-roundtrip) that declares strategy.matrix include
entries for the two variants (name/tests/timeout), set name to include
matrix.name, set timeout-minutes to use matrix.timeout, and replace the
hard-coded PLAYWRIGHT_TESTS_TO_RUN env with matrix.tests so the existing steps
(checkout, setup-node, setup-go, docker compose, ./wait-and-test.sh platform)
are reused for both legs without duplicating the steps.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/reusable_build-and-test.yaml:
- Around line 223-232: The platform-huge job currently runs on every workflow
and blocks the ci job; update the platform-huge job (job name "platform-huge")
to gate it with an if condition (e.g., only run on push, scheduled/nightly,
workflow_dispatch, or when a PR has a specific label like "run-huge") so it does
not execute on every PR, and then remove "platform-huge" from the ci job's needs
list (or change the ci job success logic) so skipped runs no longer block
merges; ensure the gating condition is added to the platform-huge job and
references the same job name so downstream dependencies remain correct.

In `@web-app/tests/tests/huge.spec.ts`:
- Around line 53-56: The test awaits page.waitForEvent('download') too early:
remove the leading await so that plainDownloadPromise is the unresolved promise
returned by page.waitForEvent('download', { timeout: 60000 }) before triggering
the download; then perform page.locator('#fileSink').click() and
page.locator('#decryptButton').click(), and finally await plainDownloadPromise
to capture the decrypt-triggered download. Locate the occurrences of
plainDownloadPromise, page.waitForEvent, and the '#decryptButton'/'#fileSink'
locators in huge.spec.ts and change the assignment to store the promise (no
await) prior to clicking.
- Around line 22-24: The test references appUrl in the Large File test which is
undefined and causes page.goto(`${appUrl}?...`) to navigate to an invalid URL;
fix by either (A) defining a constant appUrl (e.g., const appUrl =
'http://localhost:65432') at the top of the test file where authorize and
test('Large File') are declared, (B) export appUrl from acts.ts and import it
into this spec, or (C) avoid appUrl entirely and call page.goto with a relative
path leveraging Playwright's baseURL; update the reference used in page.goto to
use the chosen valid appUrl or relative path.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0a76d048-faef-4e3b-800b-18377a8b8663

📥 Commits

Reviewing files that changed from the base of the PR and between b424f3c and e3f67f7.

📒 Files selected for processing (7)
  • .github/workflows/build-and-test.yaml
  • .github/workflows/reusable_build-and-test.yaml
  • lib/tdf3/src/ciphers/aes-gcm-cipher.ts
  • lib/tdf3/src/ciphers/symmetric-cipher-base.ts
  • lib/tdf3/src/models/encryption-information.ts
  • lib/tests/mocha/unit/crypto/crypto-service.spec.ts
  • web-app/tests/tests/huge.spec.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • lib/tests/mocha/unit/crypto/crypto-service.spec.ts

Comment thread .github/workflows/reusable_build-and-test.yaml Outdated
Comment thread web-app/tests/tests/huge.spec.ts Outdated
Comment thread web-app/tests/tests/huge.spec.ts Outdated

eugenioenko previously approved these changes Apr 23, 2026
Contributor

@eugenioenko eugenioenko left a comment


Code changes look great

@github-actions

X-Test Failure Report

opentdf-ctl
opentdf-sdk-lib

@github-actions

X-Test Failure Report

opentdf-ctl
opentdf-sdk-lib

@bnrobinson93 bnrobinson93 force-pushed the feat/919-chunker-memory-management branch from 1d03e62 to bfb9692 Compare April 23, 2026 19:57
@github-actions

X-Test Failure Report

opentdf-ctl
opentdf-sdk-lib


@bnrobinson93 bnrobinson93 merged commit 66aad4e into main Apr 23, 2026
23 checks passed
@bnrobinson93 bnrobinson93 deleted the feat/919-chunker-memory-management branch April 23, 2026 20:30