LLM Benchmarking #3486
Conversation
```yaml
  llm_ci_check:
    name: Verify docs/rustdoc_json hashes
    if: ${{ github.event_name == 'pull_request' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: dtolnay/rust-toolchain@stable
      - uses: Swatinem/rust-cache@v2

      - name: Run hash check (both langs)
        working-directory: public/crates/xtask-llm-benchmark
        run: cargo llm ci-check
```
Check warning — Code scanning / CodeQL: Workflow does not contain permissions (Medium)

Copilot Autofix
The best way to fix the problem is to restrict the permissions of the GITHUB_TOKEN explicitly for the llm_ci_check job, by adding a permissions: block just below its job name in .github/workflows/ci.yml. As per the CodeQL suggestion and established best practices, the minimal required permission is contents: read, which grants the job read-only access to code in the repository. This allows the job to perform checkout and CI actions without granting unnecessary write permissions. No changes are required to existing imports, steps, or functionality.
Implement this by editing .github/workflows/ci.yml to add a `permissions:` block with `contents: read` directly beneath the `name:` field of the `llm_ci_check` job (line 413).
```diff
@@ -411,6 +411,8 @@

   llm_ci_check:
     name: Verify docs/rustdoc_json hashes
+    permissions:
+      contents: read
     if: ${{ github.event_name == 'pull_request' }}
     runs-on: ubuntu-latest
     steps:
```
Notes
Description of Changes
Introduce a new LLM benchmarking app and supporting code.
- New `llm` xtask (`cargo llm`) with subcommands `run`, `routes list`, `diff`, `ci-check`.
- Run filters: `--lang`, `--categories`, `--tasks`, `--providers`, `--models`.
- Route resolution (`provider:model`) with HTTP LLM vendor clients; env-driven keys/base URLs (sketched below).
- `DEVELOP.md` includes `cargo llm …` usage.
- This PR is the initial addition of the app and its modules (runner, config, routes, prompt/segmentation, scorers, schema/types, defaults/constants/paths/hashing/combine, publishers, spacetime guard, HTML stats viewer).
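As a rough illustration of the `provider:model` route format mentioned above — the `Route` type and its parsing below are a hypothetical sketch, not the crate's actual code:

```rust
/// Hypothetical sketch of a "provider:model" route, not the crate's real type.
#[derive(Debug, Clone, PartialEq)]
struct Route {
    provider: String,
    model: String,
}

impl std::str::FromStr for Route {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Split on the first ':' so model names containing ':' still work.
        match s.split_once(':') {
            Some((provider, model)) if !provider.is_empty() && !model.is_empty() => Ok(Route {
                provider: provider.to_string(),
                model: model.to_string(),
            }),
            _ => Err(format!("expected `provider:model`, got `{s}`")),
        }
    }
}

fn main() {
    let route: Route = "openai:gpt-5".parse().expect("valid route");
    assert_eq!(route.provider, "openai");
    assert_eq!(route.model, "gpt-5");
    println!("{route:?}");
}
```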
How it works
Pick what to run
- Select specific tasks (`--tasks 0,7,12`), a language (`--lang rust|csharp`), or categories (`--categories basics,schema`).
- Optionally limit providers/models (`--providers …`, `--models …`).

Resolve routes
- Routes are `provider:model` pairs (e.g. `openai:gpt-5`), as in the sketch above.

Build context
Execute calls
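The runner's internals aren't shown in this description. A minimal sketch of what concurrency-bounded execution could look like, assuming a tokio runtime; `call_route` and the surrounding structure are placeholders, and only the `LLM_BENCH_CONCURRENCY` variable name comes from this PR:

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Placeholder for an actual LLM call; the real runner would hit an HTTP vendor client.
async fn call_route(route: &str, task: usize) -> String {
    format!("{route} answered task {task}")
}

#[tokio::main]
async fn main() {
    // Global cap on in-flight requests, driven by an env var as in the PR.
    let max_in_flight: usize = std::env::var("LLM_BENCH_CONCURRENCY")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(4);
    let permits = Arc::new(Semaphore::new(max_in_flight));

    let mut handles = Vec::new();
    for task in 0..10 {
        let permits = Arc::clone(&permits);
        handles.push(tokio::spawn(async move {
            // Each call holds a permit for its duration, bounding parallelism.
            let _permit = permits.acquire_owned().await.expect("semaphore closed");
            call_route("openai:gpt-5", task).await
        }));
    }

    for handle in handles {
        println!("{}", handle.await.expect("task panicked"));
    }
}
```

A per-route cap (as suggested by `LLM_BENCH_ROUTE_CONCURRENCY`) could be layered on the same pattern with a second semaphore per route.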
Score outputs
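Purely illustrative, assuming the `scorers` module exposes something trait-like; the `Scorer` trait and `ExactMatch` implementation below are made up:

```rust
/// Hypothetical scorer interface; the real `scorers` module may look different.
trait Scorer {
    fn score(&self, answer: &str, golden: &str) -> f64;
}

/// Trivial exact-match scorer: 1.0 if the trimmed answer equals the golden, else 0.0.
struct ExactMatch;

impl Scorer for ExactMatch {
    fn score(&self, answer: &str, golden: &str) -> f64 {
        if answer.trim() == golden.trim() {
            1.0
        } else {
            0.0
        }
    }
}

fn main() {
    let scorer = ExactMatch;
    println!("{}", scorer.score("42\n", "42")); // 1
    println!("{}", scorer.score("41", "42"));   // 0
}
```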
Update results file
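How the results artifact is merged isn't detailed here; a minimal sketch, assuming a JSON map keyed by route and task (the file name, shape, and `serde_json` usage are assumptions):

```rust
use std::collections::BTreeMap;
use std::fs;

// Hypothetical shape: results["openai:gpt-5"]["task-7"] = score.
type Results = BTreeMap<String, BTreeMap<String, f64>>;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let path = "llm-results.json"; // made-up file name

    // Load the existing results file if present, otherwise start empty.
    let mut results: Results = match fs::read_to_string(path) {
        Ok(text) => serde_json::from_str(&text)?,
        Err(_) => Results::new(),
    };

    // Overwrite only the entries produced by this run; other routes/tasks are kept.
    results
        .entry("openai:gpt-5".to_string())
        .or_default()
        .insert("task-7".to_string(), 0.92);

    fs::write(path, serde_json::to_string_pretty(&results)?)?;
    Ok(())
}
```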
API and ABI breaking changes
None. New application and modules; no existing public APIs/ABIs altered.
Expected complexity level and risk
4/5. New CLI, routing, evaluation, and artifact format.
Concurrency is controlled via `LLM_BENCH_CONCURRENCY` / `LLM_BENCH_ROUTE_CONCURRENCY`.

Testing
I ran the full test matrix and generated results for every task against every vendor, model, and language (Rust + C#). I also tested the CI check locally using `act`.
Please verify
- `llm run --tasks 0,1,2` (explicit run)
- `llm run --lang rust --categories basics` (filters)
- `llm run --categories basics,schema` (multiple categories)
- `llm run --lang csharp` (language switch)
- `llm run --providers openai,anthropic --models "openai:gpt-5 anthropic:claude-sonnet-4-5"` (provider/model limits)
- `llm run --hash-only` (dry integrity)
- `llm run --goldens-only` (test goldens only)
- `llm run --force` (skip hash check)
- `llm ci-check`
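For context on what the integrity/hash checks (`--hash-only`, `ci-check`) verify, here is a rough sketch of comparing a recorded hash against a freshly computed one; the `sha2`/`hex` crates and the file layout under `docs/rustdoc_json` are assumptions, not the crate's actual scheme:

```rust
use sha2::{Digest, Sha256};
use std::fs;

// Compute a hex digest of a generated artifact, e.g. a rustdoc JSON file.
fn file_hash(path: &str) -> std::io::Result<String> {
    let bytes = fs::read(path)?;
    Ok(hex::encode(Sha256::digest(&bytes)))
}

fn main() -> std::io::Result<()> {
    // Made-up paths: the real layout under docs/rustdoc_json may differ.
    let artifact = "docs/rustdoc_json/module_bindings.json";
    let recorded = fs::read_to_string("docs/rustdoc_json/module_bindings.json.sha256")?;
    let expected = recorded.trim();

    let actual = file_hash(artifact)?;
    if actual == expected {
        println!("hash check OK");
        Ok(())
    } else {
        eprintln!("hash mismatch: expected {expected}, got {actual}");
        std::process::exit(1);
    }
}
```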