ci+fix: add update-providers workflow + non-destructive fetch_models #914
Open
Spherrrical wants to merge 2 commits into main from
Conversation
Adds .github/workflows/update-providers.yml so the provider_models.yaml
refresh can be triggered via workflow_dispatch (manual UI / gh CLI) or
repository_dispatch (from the PlanoHelper Slack bot).
The workflow:
- Runs cargo run --bin fetch_models --features model-fetch with all
provider API keys + AWS creds available as env from secrets.
- Opens a PR via peter-evans/create-pull-request scoped to just
crates/hermesllm/src/bin/provider_models.yaml.
- On repository_dispatch, posts the PR link (or failure) back to Slack
via the response_url in the dispatch payload.
Includes keys for the providers fetch_models reads today (OpenAI,
Anthropic, Mistral, DeepSeek, Grok, Moonshot, Dashscope/Qwen, Zhipu,
Xiaomi/Mimo, Google) plus forward-compat env for OpenRouter and Vercel
AI Gateway (added in #902).
The workflow has no push: or schedule: trigger, so landing this is inert
until something dispatches it. Required secrets are documented in
apps/planohelper/README.md (in a follow-up PR).
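The workflow described above might look roughly like this. This is a hedged sketch, not the merged file: the step layout, secret names beyond those listed in this PR, and the `create-pull-request` inputs are illustrative.

```yaml
# .github/workflows/update-providers.yml (sketch)
name: update-providers
on:
  workflow_dispatch:
  repository_dispatch:
    types: [update-providers]
jobs:
  refresh:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Refresh provider_models.yaml; missing keys are skipped non-destructively.
      - run: cargo run --bin fetch_models --features model-fetch
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          # ...remaining provider keys and AWS credentials from repo secrets...
      # Open a PR scoped to just the regenerated file.
      - uses: peter-evans/create-pull-request@v7
        with:
          add-paths: crates/hermesllm/src/bin/provider_models.yaml
```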
Previously fetch_models rebuilt provider_models.yaml from scratch on
every run, so running locally (or in CI) without e.g. ANTHROPIC_API_KEY,
GOOGLE_API_KEY, or AWS Bedrock credentials would silently drop those
providers' entries from the file. The user only meant to refresh what
they had keys for.
Now fetch_models loads the existing provider_models.yaml first and
treats each provider independently:
- Successful fetch -> entry replaced with fresh data ("updated")
- Missing API key -> existing entry preserved ("skipped")
- Failed fetch -> existing entry preserved ("failed, kept existing")
- Missing AWS creds -> Amazon entry preserved instead of running
`aws bedrock list-foundation-models` and erroring out
If the file doesn't exist yet it starts fresh, same as before. If the
file exists but can't be parsed, the binary refuses to overwrite it and
exits with an error rather than silently nuking it.
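The per-provider decision table above can be sketched as follows. This is a simplified illustration, not the actual fetch_models.rs code: `merge_providers` and the `Models` alias are hypothetical names, and the real binary does HTTP fetches and serde_yaml I/O rather than taking a closure.

```rust
use std::collections::BTreeMap;
use std::env;

// Hypothetical simplified model list; the real binary uses richer structs.
type Models = Vec<String>;

/// Merge freshly fetched provider data into the existing map,
/// preserving entries that could not be (re)fetched.
fn merge_providers(
    existing: BTreeMap<String, Models>,
    providers: &[(&str, &str)], // (provider name, API-key env var)
    fetch: impl Fn(&str) -> Result<Models, String>,
) -> BTreeMap<String, Models> {
    let mut out = existing;
    for &(provider, key) in providers {
        if env::var(key).is_err() {
            // Missing API key -> keep the existing entry ("skipped").
            println!("skipped {provider}: {key} not set");
            continue;
        }
        match fetch(provider) {
            // Successful fetch -> replace with fresh data ("updated").
            Ok(models) => {
                out.insert(provider.to_string(), models);
            }
            // Failed fetch -> keep the existing entry ("failed, kept existing").
            Err(err) => println!("failed {provider}: {err}, kept existing"),
        }
    }
    out
}
```

The key property is that `out` starts as the parsed existing file, so a provider's entry can only ever be replaced, never dropped.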
Other changes that come along for the ride:
- HashMap -> BTreeMap for the providers map. Output YAML now has a
stable, alphabetical provider order across runs (eliminates
HashMap-iteration churn in PR diffs). The first PR after this
lands will reorder existing entries one time.
- Per-provider summary at the end (updated / skipped / failed)
so the workflow logs and Slack PR body make it obvious what
actually changed vs. what was left alone.
- File-level usage comment updated to match the new behavior and
list the additional env vars (MISTRAL_API_KEY, MIMO_API_KEY).
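The HashMap-to-BTreeMap swap is what buys the stable diffs: a BTreeMap iterates its keys in sorted order, so serializing it yields the same alphabetical YAML on every run, whereas HashMap iteration order can differ between processes. A minimal illustration (`provider_order` is a hypothetical helper, not code from this PR):

```rust
use std::collections::BTreeMap;

// Keys come back in sorted (alphabetical) order regardless of insertion order.
fn provider_order(providers: &BTreeMap<String, usize>) -> Vec<&str> {
    providers.keys().map(|k| k.as_str()).collect()
}
```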
No tests existed for this binary; manually verified with `env -i` (no
keys at all) that all 13 existing providers are preserved with their
original model counts.
Summary
Two related changes that together make the `provider_models.yaml` refresh safe to run from anywhere, with any subset of API keys.
1. ci: add `.github/workflows/update-providers.yml`

Lands the workflow that the PlanoHelper Slack bot will dispatch. Triggers:

- `workflow_dispatch`: Actions UI "Run workflow" button or `gh workflow run update-providers.yml --ref <branch>`. Useful for testing the workflow itself from any feature branch.
- `repository_dispatch` (event type `update-providers`): sent by the Slack bot (incoming in a follow-up PR). Always runs main's version of the workflow against main's code, by GitHub design.

The workflow has no `push:` or `schedule:` trigger, so merging this PR is inert: nothing runs until something explicitly dispatches it.
What it does on dispatch:

- Checks out main, installs the stable Rust toolchain, configures AWS credentials, restores a cargo cache.
- Runs `cargo run --bin fetch_models --features model-fetch` with all provider API keys piped in as env from repo secrets.
- Opens a PR via `peter-evans/create-pull-request@v7` on branch `bot/update-providers-<run_id>`, scoped to just `crates/hermesllm/src/bin/provider_models.yaml`.
- On `repository_dispatch`, posts the resulting PR link (or a failure message with a logs button) back to Slack via the `response_url` carried in the dispatch `client_payload`.

2. fix(fetch_models): non-destructive merge

Previously `fetch_models` rebuilt `provider_models.yaml` from scratch on every run, so running locally (or in CI) without e.g. `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, or AWS Bedrock credentials would silently drop those providers' entries from the file, even though the user only meant to refresh what they had keys for.
After this change, each provider is treated independently:

- Successful fetch: entry replaced with fresh data ("updated")
- Missing API key: existing entry preserved ("skipped")
- Failed fetch: existing entry preserved ("failed, kept existing")
- Missing AWS creds: Amazon entry preserved instead of erroring out

If the file doesn't exist yet it starts fresh, same as before. If the file exists but can't be parsed, the binary refuses to overwrite it and exits with an error.
Bonus changes that come along:

- `HashMap` → `BTreeMap` for the providers map. Output YAML now has a stable, alphabetical provider order across runs (eliminates HashMap-iteration churn in PR diffs). The first dispatched run after this lands will produce a one-time reorder PR.
- Per-provider summary at the end (updated / skipped / failed) so the workflow logs and Slack PR body make it obvious what actually changed vs. what was left alone.
- File-level usage comment updated to match the new behavior and list the env vars added in feat(providers): add Vercel AI Gateway and OpenRouter support (#902).
Manually verified by running with `env -i` (no keys at all): all 13 existing providers are preserved with their original model counts; the only file diff is the one-time alphabetical reorder.
Required secrets (set before first dispatch)
Under Settings → Secrets and variables → Actions. Missing keys are
fine — providers without a key are now preserved from the existing file.
- Provider keys (consumed by `fetch_models.rs`): `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `MISTRAL_API_KEY`, `DEEPSEEK_API_KEY`, `GROK_API_KEY`, `MOONSHOT_API_KEY`, `DASHSCOPE_API_KEY`, `ZHIPU_API_KEY`, `MIMO_API_KEY`, `GOOGLE_API_KEY`
- Forward-compat keys (passed through to the workflow env for #902's new providers; `fetch_models.rs` doesn't fetch from them yet, but adding the env now means no workflow edit when it does): `OPENROUTER_API_KEY`, `AI_GATEWAY_API_KEY`
- AWS (for Bedrock / Amazon models): `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`

Test plan
After merge:

- Dispatching is safe even before any provider data changes (it'll just produce the one-time reorder PR).
- Dispatch with `gh workflow run update-providers.yml --ref main` and watch with `gh run watch`.
- Confirm a PR titled `chore: refresh provider_models.yaml` is opened. The first run will be the alphabetical reorder + any genuine provider updates.
- With a provider's key unset, confirm that provider's entry is preserved (not dropped) and the run log says `⊘ <provider>: <KEY> not set (keeping existing N models)`.

Follow-ups (separate PRs)
- `apps/planohelper`: the Slack bot Vercel app that sends the `repository_dispatch`.
- Extend `fetch_models.rs` to actually fetch from OpenRouter and Vercel AI Gateway (both have OpenAI-compatible `/v1/models` endpoints; a small addition to the existing `provider_configs` vec).
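Because both follow-up providers speak the OpenAI-compatible `/v1/models` protocol, adding them plausibly means appending two entries. This is a hypothetical sketch: the `ProviderConfig` field names are assumptions (the real struct in `fetch_models.rs` may differ), the OpenRouter URL is its documented public endpoint, and the Vercel AI Gateway base URL is an assumption to verify against Vercel's docs.

```rust
// Hypothetical shape of a provider_configs entry; not the real struct.
#[derive(Debug, PartialEq)]
struct ProviderConfig {
    name: &'static str,
    models_url: &'static str, // OpenAI-compatible /v1/models endpoint
    key_env: &'static str,    // env var holding the API key
}

fn followup_providers() -> Vec<ProviderConfig> {
    vec![
        ProviderConfig {
            name: "openrouter",
            models_url: "https://openrouter.ai/api/v1/models",
            key_env: "OPENROUTER_API_KEY",
        },
        ProviderConfig {
            name: "vercel_ai_gateway",
            // Assumed base URL; check the Vercel AI Gateway docs before use.
            models_url: "https://ai-gateway.vercel.sh/v1/models",
            key_env: "AI_GATEWAY_API_KEY",
        },
    ]
}
```

With the non-destructive merge in place, entries like these would simply be skipped until their keys are added as repo secrets.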