Conversation
Walkthrough

The changes update the deployment and search configuration processes. In the GitHub Actions workflow, a new "Compress Search Index" step is added to compress the generated JSON file, and the cleanup step now removes additional files. The debug script similarly adds compression and expands its file cleanup. In the search module, a new asynchronous function is introduced to fetch and decompress a compressed file, while the service worker's cache version is updated.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant GitHub as GitHub Actions
    participant Build as Build Process
    Dev->>GitHub: Push commit (any branch)
    GitHub->>Build: Trigger build-and-deploy job
    Build->>Build: Execute build steps
    Build->>Build: "Compress Search Index" (gzip, compress searchindex.json)
    Build->>Build: Cleanup (remove searchindex.js)
    Build->>GitHub: Deploy site changes
```

```mermaid
sequenceDiagram
    participant Client as Browser
    participant Searcher as fetchAndDecompress()
    participant Network as fetchRequest()
    Client->>Searcher: Invoke initSearch()
    Searcher->>Network: fetchRequest(url: searchindex.json.gz)
    Network-->>Searcher: Return compressed data
    Searcher->>Searcher: Decompress using DecompressionStream
    Searcher->>Client: Return parsed JSON
```
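The client-side flow in the diagram can be sketched as follows. This is a minimal sketch, not the PR's exact code: `fetchRequest` stands in for the project's existing fetch wrapper and is stubbed here with plain `fetch`.

```typescript
// Sketch of the fetch-and-decompress flow from the diagram above.
// `fetchRequest` stands in for the project's existing wrapper (assumption).
const fetchRequest = (url: string): Promise<Response> => fetch(url);

// Decompress a gzip response body and parse it as JSON.
const decompressResponse = async (response: Response): Promise<unknown> => {
  if (!response.body) {
    throw new Error('Response body is null');
  }
  const stream = response.body.pipeThrough(new DecompressionStream('gzip'));
  const decompressed = await new Response(stream).arrayBuffer();
  return JSON.parse(new TextDecoder().decode(decompressed));
};

const fetchAndDecompress = async (url: string): Promise<unknown> =>
  decompressResponse(await fetchRequest(url));
```

Splitting the decompression step out of the fetch makes it easy to exercise without a network round trip.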
Actionable comments posted: 0
🧹 Nitpick comments (1)
js/searcher.ts (1)
223-224: Consider adding proper TypeScript type definition

The lint ignore comment for explicit `any` suggests TypeScript doesn't have proper type definitions for the 'brotli' format in DecompressionStream. Consider creating a custom type declaration for DecompressionStream that includes 'brotli' as a valid format to avoid using `any`:
```diff
- /* biome-ignore lint: no-explicit-any */
- const stream = response.body.pipeThrough(new DecompressionStream('brotli' as any));
+ // Add to a .d.ts file:
+ // declare type CompressionFormat = 'gzip' | 'deflate' | 'deflate-raw' | 'brotli';
+ // declare class DecompressionStream {
+ //   constructor(format: CompressionFormat);
+ // }
+ const stream = response.body.pipeThrough(new DecompressionStream('brotli'));
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- .github/workflows/gh-pages.yml (3 hunks)
- debug.sh (2 hunks)
- js/searcher.ts (2 hunks)
- js/serviceworker.ts (1 hunks)
🔇 Additional comments (8)
js/serviceworker.ts (1)
3-3: CACHE_VERSION update appropriately reflects major changes

The cache version has been bumped from v6.1.0 to v7.0.0, which is appropriate for a significant change in the caching strategy. This major version increment ensures that clients will fetch the new compressed search index file rather than using cached versions of the uncompressed file.
debug.sh (2)
54-54: Brotli compression matches the PR objective

Adding Brotli compression for the search index after generation is consistent with the goal of reducing file size. This addition correctly compresses the file and creates a `.br` extension file that will be used by the site.
66-67: Cleanup of original files is appropriate

Uncommenting the lines that remove both the original JavaScript and JSON files after compression is good practice, keeping only the compressed version in the final build.
js/searcher.ts (2)
244-244: Search index file path correctly updated

Updating the file path to fetch the compressed .br file matches the changes made in the build process. This change ensures the site uses the smaller compressed file, achieving the PR's primary objective.
216-228: Details
✅ Verification successful
New decompression implementation looks good
The implementation of `fetchAndDecompress` correctly handles fetching and decompressing the Brotli-compressed search index file. The function properly:

- Uses the existing `fetchRequest` function with error handling
- Checks for null response body
- Decompresses the data using DecompressionStream
- Parses the result as JSON
🌐 Web query:
What browsers support DecompressionStream with brotli?

💡 Result:

Brotli decompression via the `DecompressionStream` API is supported in most modern browsers as of 2025, though implementation details vary. Below is a breakdown of support across major browsers:

Browser Support for `DecompressionStream` with Brotli
- Chrome: Supported from version 50 onward[5].
- Edge: Supported from version 15 onward[5].
- Firefox: Supported from version 44 onward[5].
- Safari: Supported from version 11 onward (including iOS 11+)[5][1].
- Opera: Supported from version 38 onward[5].
- Android Browser: Supported from version 97 onward[1].
Key Considerations
- HTTPS Requirement: Brotli compression only works over HTTPS connections[2][7].
- Performance: Brotli typically achieves 18-26% better compression ratios than Gzip, especially at higher compression levels (e.g., level 11)[2][7].
- Fallback Handling: Browsers without Brotli support automatically fall back to Gzip or uncompressed content[2][5].
Limitations
- Some security/firewall solutions like Palo Alto Networks only added Brotli decompression support in November 2024[8].
- Older browsers (e.g., IE, Safari ≤10) and legacy mobile browsers lack support[1][5].
- Stream decompression in Web Workers requires WASM polyfills like `brotli-wasm` for full cross-browser compatibility[3][6].

For developers, the recommended implementation pattern is:

```js
const decompressionStream = new DecompressionStream('br');
const decompressedStream = response.body.pipeThrough(decompressionStream);
```

This works natively in supported browsers, while unsupported environments can use the `brotli-wasm` library[3].

Citations:
- 1: https://www.lambdatest.com/web-technologies/brotli
- 2: https://www.keycdn.com/blog/keycdn-brotli-support
- 3: https://github.com/httptoolkit/brotli-wasm
- 4: https://devblogs.microsoft.com/dotnet/introducing-support-for-brotli-compression/
- 5: https://caniuse.com/brotli
- 6: https://gpuopen.com/brotli-g-sdk-announce/
- 7: https://blog.cloudflare.com/this-is-brotli-from-origin/
- 8: https://docs.paloaltonetworks.com/content/techdocs/en_US/whats-new/new-features/november-2024/support-for-brotli-decompression
- 9: whatwg/compression#34
Decompression Implementation Verified: Browser Support Confirmed
The updated `fetchAndDecompress` function in js/searcher.ts (lines 216–228) correctly handles fetching, error checking, decompression via the `DecompressionStream` API, and JSON parsing of the Brotli-compressed search index file. Our verification confirms that browsers including Chrome (v50+), Edge (v15+), Firefox (v44+), Safari (v11+), Opera (v38+), and Android Browser (v97+) support stream decompression with Brotli as of 2025.
- The function properly checks for a null response body.
- It decompresses the fetched data and decodes it into JSON.
- The explicit cast (`'brotli' as any`) bypasses type-checking; consider using the standard identifier (`'br'`) if possible for clarity and conformity with common usage.

Overall, the implementation meets the requirements.
.github/workflows/gh-pages.yml (3)
277-277: Verify if branch restriction removal is intentional

Commenting out the branch filter for the build-and-deploy job means it will run on any branch or PR, not just main. This could lead to deployment attempts from non-main branches.
Is this intentional? Typically, GitHub Pages deployments are restricted to specific branches. Check if this change is meant for testing during development and should be restored before merging.
344-349: Brotli integration looks good

The new step to install and use Brotli for compressing the search index in the CI/CD pipeline is well-implemented. Using Homebrew to install Brotli is appropriate for the macOS runner.
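For reference, the effect of the CI's brotli invocation can be reproduced in-process with Node's built-in zlib bindings. This is a sketch, not the workflow's actual command; the quality level and the `.br` naming convention mirror common brotli CLI usage and are assumptions here.

```typescript
import { brotliCompressSync, brotliDecompressSync, constants } from 'node:zlib';

// Compress a search-index JSON string at maximum quality (11), roughly what
// `brotli -q 11 searchindex.json` would produce as searchindex.json.br.
const compressIndex = (json: string): Buffer =>
  brotliCompressSync(Buffer.from(json), {
    params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
  });
```

Doing this in the build script instead of a CLI avoids the Homebrew install step, at the cost of tying the build to Node.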
361-362: Appropriate cleanup of original files

The removal of both the original JavaScript and JSON search index files is appropriate since they're no longer needed after compression, keeping the deployment clean and minimizing unnecessary files.
I don't know if it's “I'm glad I noticed it before you did” or “You should have noticed it before you opened the PR”. I'll have to think about what to do...😓
Actionable comments posted: 2
🧹 Nitpick comments (2)
js/searcher.ts (2)
223-224: Improve type safety by avoiding the `as any` cast

The linter ignore comment suggests a problem with type definitions. TypeScript may not recognize 'gzip' as a valid format string for DecompressionStream.
Consider creating a proper type definition or using a more type-safe approach:
```diff
- /* biome-ignore lint: no-explicit-any */
- const stream = response.body.pipeThrough(new DecompressionStream('gzip' as any));
+ // Define acceptable compression formats
+ type CompressionFormat = 'gzip' | 'deflate' | 'br';
+ const format: CompressionFormat = 'gzip';
+ const stream = response.body.pipeThrough(new DecompressionStream(format));
```
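As a point of comparison, current lib.dom typings do accept 'gzip' for DecompressionStream, so a cast should not be needed for that format. A self-contained round-trip sketch (names here are illustrative, not the project's):

```typescript
// Round-trip sketch: gzip-compress a string, then decompress it with
// DecompressionStream('gzip'); no `as any` cast is required for 'gzip'.
const gzipRoundTrip = async (text: string): Promise<string> => {
  const compressed = new Blob([text]).stream().pipeThrough(new CompressionStream('gzip'));
  const body = new Response(compressed).body;
  if (!body) throw new Error('Response body is null');
  const decompressed = body.pipeThrough(new DecompressionStream('gzip'));
  return new Response(decompressed).text();
};
```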
226-227: Optimize TextDecoder usage

A new TextDecoder instance is created for each function call, which is inefficient for repeated usage.
Consider creating a reusable TextDecoder instance:
```diff
+// Create a reusable TextDecoder instance
+const decoder = new TextDecoder();
+
 const fetchAndDecompress = async (url: string) => {
   const response = await fetchRequest(url);
   if (!response.body) {
     throw new Error('Response body is null');
   }
   /* biome-ignore lint: no-explicit-any */
   const stream = response.body.pipeThrough(new DecompressionStream('gzip' as any));
   const decompressed = await new Response(stream).arrayBuffer();
-  return JSON.parse(new TextDecoder().decode(decompressed));
+  return JSON.parse(decoder.decode(decompressed));
 };
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- js/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (3)
- .github/workflows/gh-pages.yml (3 hunks)
- debug.sh (2 hunks)
- js/searcher.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- debug.sh
- .github/workflows/gh-pages.yml
🔇 Additional comments (1)
js/searcher.ts (1)
244-244: Details
✅ Verification successful
Update file extension to match compression algorithm
The file extension should match the compression algorithm being used.
Verify that the build process is correctly generating a file with the appropriate extension. If using gzip, the file should be `searchindex.json.gz`; if using Brotli, it should be `searchindex.json.br`.
🏁 Script executed:
```shell
#!/bin/bash
# Check if the correct compressed file exists and what compression is being used

# Look for compressed search index files
echo "Compressed search index files:"
fd "searchindex.json" --extension gz
fd "searchindex.json" --extension br

# Examine file type of compressed files if they exist
echo "File type information:"
fd "searchindex.json.gz" --exec file {} \; 2>/dev/null || echo "No .gz file found"
fd "searchindex.json.br" --exec file {} \; 2>/dev/null || echo "No .br file found"
```

Length of output: 291
File extension matches the current compression setup
The verification indicates that only a gzip-compressed file (using the `.gz` extension) is present. Since no Brotli-compressed (`.br`) version was found, the current reference to `searchindex.json.gz` is appropriate. If the compression algorithm changes in the future, please update the file extension accordingly.
Actionable comments posted: 1
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- .github/workflows/gh-pages.yml (3 hunks)
🔇 Additional comments (1)
.github/workflows/gh-pages.yml (1)
277-277: Clarify Branch Restriction Removal

The branch filter condition on line 277 has been commented out. Please confirm that removing the restriction (i.e. allowing the build-and-deploy job to run from every branch instead of only `main`) is intentional. This could potentially lead to deployments from unintended branches if not monitored carefully.
Makes me a little nervous, but I guess it's ok... Let's merge!!
This PR prepares a compressed search index file using gzip and also modifies the site to use it.

This dramatically reduces the index file size!!
2265296 -> 404117 (One digit off...! 🤣)
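A reduction of this magnitude is plausible for a search index, since such files repeat terms and paths heavily. The effect can be reproduced in miniature with Node's zlib (the data here is synthetic, not the PR's actual index):

```typescript
import { gzipSync, gunzipSync } from 'node:zlib';

// Build a repetitive JSON blob shaped loosely like a search index, gzip it,
// and compare sizes. Repetitive data like this compresses very well.
const index = JSON.stringify({
  docs: Array.from({ length: 500 }, (_, i) => ({
    url: `/posts/entry-${i}.html`,
    title: `Entry ${i}`,
    body: 'search index term frequency document '.repeat(10),
  })),
});
const compressed = gzipSync(Buffer.from(index));
const ratio = compressed.length / Buffer.byteLength(index);
```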
There is a possibility that the use of `DecompressionStream` may limit the browsers that can use the search function, but this is not a problem for browsers released after May 2023.
refs: https://developer.mozilla.org/ja/docs/Web/API/DecompressionStream
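If older browsers ever need to be supported, a feature-detection fallback is straightforward. This is a hypothetical sketch, not part of this PR; the loader functions are placeholders:

```typescript
type IndexLoader = () => Promise<unknown>;

// Hypothetical fallback (not in this PR): use the gzip path only when the
// runtime exposes DecompressionStream, otherwise load the plain JSON index.
const pickLoader = (gzipLoader: IndexLoader, plainLoader: IndexLoader): IndexLoader =>
  typeof DecompressionStream === 'function' ? gzipLoader : plainLoader;
```

This would require keeping the uncompressed searchindex.json deployed alongside the .gz file, which works against the cleanup steps above, so it is only worthwhile if legacy-browser support actually matters.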
Summary by CodeRabbit
New Features

- Search now downloads a compressed search index and decompresses it in the browser, reducing the index download size.

Chores

- The build workflow and debug script compress the search index and remove the uncompressed files; the service worker cache version was bumped to invalidate stale caches.