fix: fix quick start link and examples #100
base: main
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough: Updated documentation links and added Sphinx API docs and build integration; removed the
Sequence Diagram(s)

sequenceDiagram
participant CI as CI / Local dev
participant Prebuild as web/scripts/prebuild.mjs
participant Git as GitHub/HF repo
participant Python as Local Python env
participant Sphinx as Sphinx build
participant Web as web/public/docs
Note over CI,Prebuild: Prebuild invoked during web build
CI->>Prebuild: start prebuild
Prebuild->>Git: ensure dataset repo (clone or skip)
alt dataset missing
Git-->>Prebuild: cloned repo
Prebuild->>Git: git reset --hard <ref>
else dataset exists
Git-->>Prebuild: skip update
end
Prebuild->>Python: install docs deps into .sphinx-deps
Python-->>Prebuild: deps installed
Prebuild->>Sphinx: build docs (sphinx-build dirhtml)
Sphinx-->>Prebuild: generated HTML files
Prebuild->>Prebuild: inject base href into HTML
Prebuild->>Web: copy built docs -> public/docs/api/python
Prebuild-->>CI: exit (success/failure)
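The diagram above translates to a fairly short sequence of shell steps. Below is a rough, hypothetical equivalent of what web/apps/web/scripts/prebuild.mjs appears to automate; the dataset URL, the `$DATASET_REF` variable, and the output paths are assumptions for illustration, not the script's actual contents.

```bash
# Hypothetical manual walk-through of the prebuild flow sketched in the diagram.
# Repo URL, $DATASET_REF, and directory names are illustrative assumptions.
set -euo pipefail

# 1. Ensure the dataset repo exists locally; pin it to a known ref when cloning.
if [ ! -d flashinfer-trace ]; then
  git clone https://huggingface.co/datasets/flashinfer-ai/flashinfer-trace
  git -C flashinfer-trace reset --hard "$DATASET_REF"
fi

# 2. Install the docs build dependencies into an isolated directory.
python -m pip install --target .sphinx-deps -r docs/api/requirements.txt

# 3. Build the API docs with Sphinx's dirhtml builder.
PYTHONPATH=.sphinx-deps python -m sphinx -b dirhtml docs/api build/docs-html

# 4. The real script also injects a <base href> into the generated HTML here,
#    then copies the output into the web app's public directory.
mkdir -p web/apps/web/public/docs/api/python
cp -R build/docs-html/. web/apps/web/public/docs/api/python/
```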
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Pre-merge checks and finishing touches: ✅ Passed checks (2 passed)
Pull Request Overview
This PR fixes a broken documentation link and removes an unused example file from the repository.
Key Changes:
- Corrects the URL path from quick_start to quickstart in the README
- Removes the win_at_p.py example file
Reviewed Changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| README.md | Updates the quick start guide link to use the correct URL path |
| examples/win_at_p.py | Removes the entire win_at_p.py example script |
Summary of Changes

Hello @YiyanZhai, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request aims to enhance the user experience and maintainability of the project. It resolves an issue with an incorrect quick start guide link in the main documentation, making it easier for new users to get started. Additionally, it cleans up the examples directory by removing an unnecessary Python script, contributing to a more organized codebase.
Code Review
This pull request updates a link in README.md and removes the examples/win_at_p.py script. I've provided feedback on the README.md change, as the updated link to the quick start guide appears inconsistent with the documentation's file name (quick_start.mdx), potentially causing a broken link. A code suggestion is provided to align them. The removal of the win_at_p.py example is a significant change; it would be beneficial for future reference to add more context in the pull request description regarding this deletion.
README.md (diff context):

  ## Get Started

- This [guide](https://bench.flashinfer.ai/docs/start/quick_start) shows you how to use FlashInfer-Bench python module with the FlashInfer-Trace dataset.
+ This [guide](https://bench.flashinfer.ai/docs/start/quickstart) shows you how to use FlashInfer-Bench python module with the FlashInfer-Trace dataset.
The link to the quick start guide has been changed to use quickstart, but the corresponding documentation file provided in the context is named quick_start.mdx. This inconsistency might lead to a broken link. If the URL path is derived from the filename, it should probably keep the underscore to match.
Suggested change:

- This [guide](https://bench.flashinfer.ai/docs/start/quickstart) shows you how to use FlashInfer-Bench python module with the FlashInfer-Trace dataset.
+ This [guide](https://bench.flashinfer.ai/docs/start/quick_start) shows you how to use FlashInfer-Bench python module with the FlashInfer-Trace dataset.
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- docs/index.mdx (1 hunks)
- tests/integration/flashinfer/test_rmsnorm.py (0 hunks)
💤 Files with no reviewable changes (1)
- tests/integration/flashinfer/test_rmsnorm.py
docs/index.mdx (diff context):

  ## Next Steps

-   - Get Started: ./start/quick_start
+   - Get Started: ./start/quickstart
🧩 Analysis chain
Verify the updated Get Started path exists and is accessible.
The path was updated from ./start/quick_start to ./start/quickstart. Confirm that the documentation file exists at the new path and that the link renders correctly in the deployed documentation.
Run the following script to check for any remaining references to the old path in the codebase:
🏁 Script executed:
#!/bin/bash
# Description: Search for references to the old "quick_start" path in documentation and code.
# Search for old path pattern
rg -n 'quick_start|quick-start' --type md --type mdx --type ts --type js --type json

Length of output: 27
🏁 Script executed:
#!/bin/bash
# Description: Search for references to old "quick_start" path in docs and code (corrected).
# Search for old path pattern - omit mdx type since it's not recognized
echo "=== Searching for old 'quick_start' references ==="
rg -n 'quick_start' --type md --type ts --type js --type json 2>/dev/null || echo "No matches found"
echo ""
echo "=== Searching for old 'flashinfer_trace' path references ==="
rg -n 'flashinfer_trace' --type md --type ts --type js --type json 2>/dev/null || echo "No matches found"
echo ""
echo "=== Checking if new quickstart documentation file exists ==="
fd -t f 'quickstart' docs/ 2>/dev/null || echo "No matching files found"
echo ""
echo "=== Checking if new flashinfer-trace file exists ==="
fd -t f 'flashinfer-trace' docs/ 2>/dev/null || echo "No matching files found"
echo ""
echo "=== Listing docs directory structure ==="
fd -t f . docs/ | head -20

Length of output: 445
🏁 Script executed:
#!/bin/bash
# Description: Explore repository structure to locate documentation and verify paths.
echo "=== Repository root contents ==="
ls -la | head -20
echo ""
echo "=== Search for all mdx files ==="
find . -name "*.mdx" -type f 2>/dev/null | head -20
echo ""
echo "=== Search for index documentation file ==="
find . -name "index.*" -path "*/docs/*" -type f 2>/dev/null | head -20
echo ""
echo "=== Search for 'start' directory ==="
find . -type d -name "start" 2>/dev/null
echo ""
echo "=== Search for quickstart or quick-start references ==="
find . -name "*quickstart*" -o -name "*quick-start*" 2>/dev/null | head -20
echo ""
echo "=== Search for all documentation-related files ==="
find . -path "*/docs/*" -type f 2>/dev/null | head -30

Length of output: 2373
🏁 Script executed:
#!/bin/bash
# Description: Inspect the actual content of docs/index.mdx to verify the path changes.
echo "=== Content of docs/index.mdx around lines 20-30 ==="
cat -n docs/index.mdx | sed -n '15,30p'
echo ""
echo "=== Full content of docs/index.mdx ==="
cat docs/index.mdx
echo ""
echo "=== Verify actual files in docs/start/ ==="
ls -la docs/start/
echo ""
echo "=== Verify actual directories in docs/ ==="
ls -la docs/ | grep "^d"

Length of output: 2613
The updated paths reference non-existent files and directories, creating broken links.
The changes to docs/index.mdx update the paths to:
- ./start/quickstart (line 23)
- ./flashinfer-trace (line 25)
However, the actual files and directories still use the old names:
- ./docs/start/quick_start.mdx (not quickstart)
- ./docs/flashinfer_trace/ directory (not flashinfer-trace)
Either rename the files and directories to match the new paths, or revert the index.mdx changes to point to the existing paths.
🤖 Prompt for AI Agents
In docs/index.mdx around line 23, the updated link "Get Started:
./start/quickstart" (and similar at line 25 for ./flashinfer-trace) points to
non-existent paths; restore or correct links to the actual files. Change the
links to the existing paths (./start/quick_start.mdx and ./flashinfer_trace/) or
alternatively rename the target files/directories to match the new slugs; update
only the index.mdx links if you prefer not to move files, ensuring relative
paths match the repository structure and verify the links resolve.
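If the new slugs are the ones to keep, one hedged way to resolve the mismatch is to rename the sources that the verification scripts located above. This is a sketch under the assumption that the docs framework derives URL slugs from file and directory names; it is not part of this PR.

```bash
# Hypothetical rename so the updated links in docs/index.mdx resolve.
git mv docs/start/quick_start.mdx docs/start/quickstart.mdx
git mv docs/flashinfer_trace docs/flashinfer-trace

# Then sweep for leftover references to the old names before committing.
rg -n 'quick_start|flashinfer_trace' docs/ README.md
```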
docs/index.mdx (diff context):

    - Get Started: ./start/quickstart
    - Installation: ./start/installation
-   - Schema (FlashInfer Trace): ./flashinfer_trace/flashinfer_trace
+   - Schema (FlashInfer Trace): ./flashinfer-trace
🧩 Analysis chain
Verify the updated FlashInfer Trace schema path exists and is accessible.
The path was updated from ./flashinfer_trace/flashinfer_trace to ./flashinfer-trace. Confirm that the documentation file exists at the new path and that the link renders correctly.
Run the following script to check for any remaining references to the old path in the codebase:
🏁 Script executed:
#!/bin/bash
# Description: Search for references to the old "flashinfer_trace/flashinfer_trace" path.
# Search for old path pattern
rg -n 'flashinfer_trace/flashinfer_trace|flashinfer_trace' --type md --type mdx --type ts --type js --type json | head -20

Length of output: 27
I'll generate corrected scripts to verify the path update without the unrecognized file type:
🏁 Script executed:
#!/bin/bash
# Search for references to old flashinfer_trace path in markdown and code files
echo "=== Searching for old path references ==="
rg -n 'flashinfer_trace/flashinfer_trace' -g '*.md' -g '*.mdx' -g '*.ts' -g '*.js' -g '*.json'
echo ""
echo "=== Searching for standalone flashinfer_trace directory references ==="
rg -n 'flashinfer_trace' -g '*.md' -g '*.mdx' --no-heading -C 2 | head -30
echo ""
echo "=== Checking if new path exists ==="
fd -t d 'flashinfer-trace' || echo "No directory found with pattern flashinfer-trace"
echo ""
echo "=== Looking for new path reference in docs ==="
rg -n 'flashinfer-trace' -g '*.md' -g '*.mdx' -C 2

Length of output: 5020
🏁 Script executed:
#!/bin/bash
# Search for actual schema documentation files
echo "=== Looking for schema documentation files ==="
fd -e md -e mdx | grep -E '(schema|flashinfer)'
echo ""
echo "=== Checking for flashinfer-trace directory or file ==="
fd 'flashinfer.*trace' -t f -t d
echo ""
echo "=== Listing docs directory structure ==="
find docs -type f -name '*.mdx' -o -name '*.md' | sort

Length of output: 1043
🏁 Script executed:
#!/bin/bash
# Verify the exact index content and path mismatch
echo "=== Current docs/index.mdx lines 23-27 ==="
sed -n '23,27p' docs/index.mdx
echo ""
echo "=== Verify actual directory exists with underscores ==="
ls -la docs/flashinfer_trace/ | head -10
echo ""
echo "=== Check if hyphenated directory exists ==="
ls -la docs/flashinfer-trace/ 2>&1 || echo "Directory does not exist (as expected)"

Length of output: 822
Update the path from ./flashinfer-trace to ./flashinfer_trace in docs/index.mdx line 25.
The updated path uses hyphens (./flashinfer-trace), but the actual directory in the repository uses underscores (docs/flashinfer_trace/). This breaks the documentation link. The directory exists at ./flashinfer_trace and contains the schema documentation files.
🤖 Prompt for AI Agents
In docs/index.mdx around line 25, the link text uses the wrong path
'./flashinfer-trace' but the repository directory is './flashinfer_trace';
update the path to './flashinfer_trace' so the documentation link points to the
existing docs/flashinfer_trace/ directory.
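If the maintainers would rather keep the files where they are, reverting only the two links in docs/index.mdx is the smaller change. A minimal sketch, assuming GNU sed and that these are the only affected occurrences:

```bash
# Hypothetical in-place revert of the two links flagged in the review.
sed -i 's|\./start/quickstart|./start/quick_start|' docs/index.mdx
sed -i 's|\./flashinfer-trace|./flashinfer_trace/flashinfer_trace|' docs/index.mdx

# Nothing should still point at the non-existent slugs afterwards.
rg -n 'quickstart|flashinfer-trace' docs/index.mdx || echo "clean"
```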
Now requires the Python version to be 3.10-3.13.

Summary by CodeRabbit (auto-generated release notes):
* **Chores**
  * Minimum Python requirement raised to 3.10; package metadata updated accordingly.
  * Python 3.13 added to supported versions.
* **Tests**
  * CI test matrix updated to run against Python 3.10–3.13.
* **Documentation**
  * Installation/prerequisites updated to reflect new supported Python range (3.10–3.13).

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
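Contributors on older interpreters can check the new floor locally before installing; a minimal sketch mirroring the 3.10–3.13 range stated above:

```bash
# Fails loudly if the local interpreter is outside the supported range.
python -c 'import sys; assert (3, 10) <= sys.version_info[:2] <= (3, 13), f"unsupported Python {sys.version_info.major}.{sys.version_info.minor}"'
```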
Summary by CodeRabbit (auto-generated release notes):
* **New Features**
  * Leaderboard section is now visible on the main page.
  * Two preset threshold buttons added (Set p=1.0 and Set p=0.7) with updated "Set p=..." labels.
* **Bug Fixes**
  * Correctness bar now reflects each entry's absolute pass-rate percentage for more accurate visualization.
Summary by CodeRabbit (auto-generated release notes):
* **New Features**
  * Integrated Vercel Analytics for usage insights.
  * Automated generation and publication of Python API docs to the web app.
  * Improved docs routing with targeted caching for faster load times.
* **Documentation**
  * Added a comprehensive Python API reference covering apply, tracing, and schema modules.
* **Chores**
  * Updated build/prebuild flow and packaging to produce and serve docs; added doc build requirements and new ignore patterns.
Actionable comments posted: 0
🧹 Nitpick comments (5)
web/apps/web/app/leaderboard/client.tsx (1)
484-484: Verify the UX impact of absolute percentage visualization.

The correctness bars now display absolute pass rates (entry.passRate * 100%) rather than normalized values relative to the highest performer. This means:
- If the top performer has 80% pass rate, their bar will only fill 80% of the available space
- Comparing relative performance between authors becomes less visual
- Absolute performance is more transparent
Is this intentional? The previous normalized approach made it easier to visually compare rankings, while absolute percentages provide more honest representation of actual pass rates.
Additionally, maxPassRate (line 288) is now only used for the guard condition maxPassRate > 0, but its name suggests normalization purposes. Consider either:

- Renaming it to hasData or similar
- Removing it and checking correctnessRanking.length > 0 directly

Apply this diff to simplify:
- const maxPassRate = correctnessRanking.length > 0 ? correctnessRanking[0].passRate : 0
+ const hasCorrectnessData = correctnessRanking.length > 0
  ...
- const width = maxPassRate > 0 ? `${entry.passRate * 100}%` : "0%"
+ const width = hasCorrectnessData ? `${entry.passRate * 100}%` : "0%"

web/apps/web/app/page.tsx (1)
64-67: Consider wrapping LeaderboardSection in Suspense for better loading UX.

The LeaderboardSection is now rendered without a Suspense boundary, unlike KernelsSection (line 71). Based on the relevant code snippet from section.tsx, this component performs significant computation, including:

- computeFastPCurvesForAuthors with 300 sample points per definition
- computeAuthorCorrectnessSummary across all datasets
- Building detailed definition-author mappings
With large datasets, this synchronous computation could block the initial page render and delay Time to First Byte (TTFB).
Apply this diff to improve loading experience:
  <div className="flex flex-col">
-   <LeaderboardSection
-     entries={leaderboardEntries}
-     baselineLabel="Per-definition baselines"
-   />
+   <Suspense fallback={<div className="container py-12 text-sm text-muted-foreground">Loading leaderboard…</div>}>
+     <LeaderboardSection
+       entries={leaderboardEntries}
+       baselineLabel="Per-definition baselines"
+     />
+   </Suspense>
    <ModelsSection models={models} />

.gitignore (1)
41-44: Keep public/ assets trackable.

web/**/public is so broad that any new static assets for the Next.js apps will be ignored, blocking us from adding things like favicons or images that the framework expects to live under public/. Next.js treats public/ as user-authored source assets, not build output, so we should only ignore the generated docs subdirectory instead of the whole tree. (nextjs.org)

Replace the broad ignore with a scoped one:
-# Sphinx
-.sphinx-deps
-web/**/public
+# Sphinx
+.sphinx-deps
+web/**/public/docs/api/python/
+web/**/public/docs/api/python/**

docs/api/conf.py (2)
32-49: Version resolution logic is sound.

The multiple fallback strategies for version resolution are appropriate. The broad exception catching is acceptable here since these are graceful fallbacks with a final default value.
Optionally, you could add debug logging for the exception paths to aid troubleshooting during doc builds:
  if scm_get_version is not None:
      try:
          return scm_get_version(root=str(repo_root), fallback_version="0.0.0")
-     except Exception:
-         pass
+     except Exception as e:
+         print(f"Warning: setuptools_scm failed: {e}", file=sys.stderr)
  try:
      with open(repo_root / "pyproject.toml", "rb") as f:
          data = tomli.load(f)
      v = data.get("project", {}).get("version")
      if v:
          return v
- except Exception:
-     pass
+ except Exception as e:
+     print(f"Warning: version from pyproject.toml failed: {e}", file=sys.stderr)
  return "0.0.0+docs"
134-134: Remove unnecessary f-string prefix.

The f-string on line 134 has no placeholders.
Apply this diff:
- "nav_links": [{"title": "Docs", "url": f"/docs", "external": True}], + "nav_links": [{"title": "Docs", "url": "/docs", "external": True}],
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
- docs/api/_static/brand/fib-dark.png is excluded by !**/*.png
- docs/api/_static/brand/fib-light.png is excluded by !**/*.png
- web/pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (21)
- .github/workflows/unit_test.yaml (1 hunks)
- .gitignore (1 hunks)
- docs/api/conf.py (1 hunks)
- docs/api/index.md (1 hunks)
- docs/api/requirements.txt (1 hunks)
- docs/api/rst/apply.rst (1 hunks)
- docs/api/rst/schema.rst (1 hunks)
- docs/api/rst/schema_definition.rst (1 hunks)
- docs/api/rst/schema_solution.rst (1 hunks)
- docs/api/rst/schema_trace.rst (1 hunks)
- docs/api/rst/schema_traceset.rst (1 hunks)
- docs/api/rst/tracing.rst (1 hunks)
- docs/start/installation.mdx (1 hunks)
- pyproject.toml (4 hunks)
- web/apps/web/app/layout.tsx (2 hunks)
- web/apps/web/app/leaderboard/client.tsx (2 hunks)
- web/apps/web/app/page.tsx (1 hunks)
- web/apps/web/next.config.ts (2 hunks)
- web/apps/web/package.json (1 hunks)
- web/apps/web/scripts/prebuild.mjs (2 hunks)
- web/turbo.json (1 hunks)
✅ Files skipped from review due to trivial changes (6)
- docs/api/rst/apply.rst
- docs/api/index.md
- docs/api/rst/schema_trace.rst
- docs/api/rst/schema_definition.rst
- docs/api/rst/schema_traceset.rst
- docs/api/rst/tracing.rst
🧰 Additional context used
🧬 Code graph analysis (1)
web/apps/web/app/page.tsx (1)
web/apps/web/app/leaderboard/section.tsx (1)
LeaderboardSection (35-106)
🪛 Ruff (0.14.3)
docs/api/conf.py
28-28: Do not catch blind exception: Exception
(BLE001)
39-40: try-except-pass detected, consider logging the exception
(S110)
39-39: Do not catch blind exception: Exception
(BLE001)
47-48: try-except-pass detected, consider logging the exception
(S110)
47-47: Do not catch blind exception: Exception
(BLE001)
134-134: f-string without any placeholders
Remove extraneous f prefix
(F541)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Run unit tests on ubuntu-latest and Python 3.11
- GitHub Check: Run unit tests on ubuntu-latest and Python 3.13
- GitHub Check: Run unit tests on ubuntu-latest and Python 3.12
- GitHub Check: Run unit tests on ubuntu-latest and Python 3.10
🔇 Additional comments (7)
web/apps/web/app/layout.tsx (1)
9-9: LGTM! Vercel Analytics integration follows best practices.

The Analytics component is correctly imported from @vercel/analytics/next for Next.js integration and placed outside the <Providers> wrapper, which is the recommended pattern to avoid potential context or hydration issues.

Also applies to: 32-32
web/apps/web/package.json (1)
32-32: No action needed—the package version is current and secure.

The verification confirms that @vercel/[email protected] is the latest version available on npm and has no known security vulnerabilities.
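That kind of check can be reproduced locally by querying the registry and the lockfile; a sketch assuming npm and pnpm are both on PATH:

```bash
# Latest published version of the dependency, straight from the npm registry.
npm view @vercel/analytics version

# Known advisories for the dependency tree described by pnpm-lock.yaml.
pnpm audit
```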
web/apps/web/app/leaderboard/client.tsx (1)

384-389: LGTM: Clear threshold controls.

The updated button group provides direct control over the p-threshold value, replacing the previous top-N approach. The implementation is straightforward and the UI labels clearly communicate the action.
web/turbo.json (1)
10-10: LGTM!

Correctly adds the generated documentation directory to build outputs, ensuring Turbo tracks the Sphinx-generated docs as part of the build artifacts.

docs/api/rst/schema.rst (1)

1-18: LGTM!

Well-structured Sphinx documentation with proper toctree organization. The content clearly outlines the schema components.

docs/api/rst/schema_solution.rst (1)

1-13: LGTM!

Properly structured Sphinx documentation using autopydantic_model and autoclass directives. The syntax is correct and consistent with the configured extensions.

docs/api/conf.py (1)

58-136: Sphinx configuration is well-structured.

The extension setup, autodoc configuration, and theme options are properly configured for generating Python API documentation with autodoc_pydantic support.
update the link to the quick start page
clean the examples folder
Summary by CodeRabbit
Documentation
Chores
CI
Tests
Style
Web