
ci+fix: add update-providers workflow + non-destructive fetch_models #914

Open

Spherrrical wants to merge 2 commits into main from
musa/update-providers-workflow

Conversation

@Spherrrical (Collaborator) commented Apr 24, 2026

Summary

Two related changes that together make the provider_models.yaml
refresh safe to run from anywhere with any subset of API keys.

1. ci: add .github/workflows/update-providers.yml

Lands the workflow that the PlanoHelper Slack bot will dispatch. Triggers:

  • workflow_dispatch — Actions UI "Run workflow" button or
    gh workflow run update-providers.yml --ref <branch>. Useful for
    testing the workflow itself from any feature branch.
  • repository_dispatch (event type update-providers) — sent by the
    Slack bot (coming in a follow-up PR). Always runs main's version
    of the workflow against main's code, by GitHub design.

The workflow has no push: or schedule: trigger, so merging this
PR is inert — nothing runs until something explicitly dispatches it.

What it does on dispatch:

  1. Checks out main, installs the stable Rust toolchain, configures AWS
    credentials, restores a cargo cache.
  2. Runs cargo run --bin fetch_models --features model-fetch with all
    provider API keys piped in as env from repo secrets.
  3. Opens a PR via peter-evans/create-pull-request@v7 on branch
    bot/update-providers-<run_id>, scoped to just
    crates/hermesllm/src/bin/provider_models.yaml.
  4. On repository_dispatch, posts the resulting PR link (or a failure
    message with a logs button) back to Slack via the response_url
    carried in the dispatch client_payload.
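The dispatch flow above can be sketched as a workflow file. The trigger types, branch naming, scoped path, and PR-creation action come from this PR; the checkout/toolchain steps and the elided cache, AWS, and Slack steps are assumptions, not the exact contents of the real file:

```yaml
# Sketch of .github/workflows/update-providers.yml — step details may
# differ from the actual workflow.
name: Update provider_models.yaml

on:
  workflow_dispatch:
  repository_dispatch:
    types: [update-providers]

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: main
      - uses: dtolnay/rust-toolchain@stable   # assumed toolchain action
      - name: Refresh provider models
        run: cargo run --bin fetch_models --features model-fetch
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          # ...remaining provider keys and AWS credentials...
      - uses: peter-evans/create-pull-request@v7
        with:
          branch: bot/update-providers-${{ github.run_id }}
          add-paths: crates/hermesllm/src/bin/provider_models.yaml
          title: "chore: refresh provider_models.yaml"
```

Note that neither trigger is `push:` or `schedule:`, which is what makes merging the file inert.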

2. fix(fetch_models): non-destructive merge

Previously fetch_models rebuilt provider_models.yaml from scratch on
every run, so running locally (or in CI) without e.g.
ANTHROPIC_API_KEY, GOOGLE_API_KEY, or AWS Bedrock credentials would
silently drop those providers' entries from the file — even though
the user only meant to refresh what they had keys for.

After this change, each provider is treated independently:

| Outcome | Behavior |
| --- | --- |
| Successful fetch | Entry replaced with fresh data ("updated") |
| Missing API key | Existing entry preserved ("skipped") |
| Failed fetch | Existing entry preserved ("failed") |
| Missing AWS creds | Amazon entry preserved (Bedrock not called) |

If the file doesn't exist yet it starts fresh, same as before. If the
file exists but can't be parsed, the binary refuses to overwrite it and
exits with an error.
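The per-provider decision can be sketched in a few lines. The names below (`merge_provider`, `Outcome`) are hypothetical; the real code in fetch_models.rs differs:

```rust
use std::collections::BTreeMap;

// Outcome labels matching the table above.
#[derive(Debug, PartialEq)]
enum Outcome {
    Updated,
    Skipped, // API key missing: keep existing entry
    Failed,  // fetch errored: keep existing entry
}

// Only a successful fetch replaces an entry; every other path leaves
// the existing map untouched.
fn merge_provider(
    providers: &mut BTreeMap<String, Vec<String>>,
    name: &str,
    had_key: bool,
    fetched: Option<Vec<String>>,
) -> Outcome {
    match (had_key, fetched) {
        (true, Some(models)) => {
            providers.insert(name.to_string(), models);
            Outcome::Updated
        }
        (false, _) => Outcome::Skipped,
        (true, None) => Outcome::Failed,
    }
}

fn main() {
    let mut providers = BTreeMap::new();
    providers.insert("anthropic".to_string(), vec!["claude-x".to_string()]);

    // No ANTHROPIC_API_KEY: the existing entry survives.
    assert_eq!(merge_provider(&mut providers, "anthropic", false, None), Outcome::Skipped);
    assert_eq!(providers["anthropic"], vec!["claude-x".to_string()]);

    // A successful fetch replaces the entry.
    let fresh = Some(vec!["gpt-x".to_string()]);
    assert_eq!(merge_provider(&mut providers, "openai", true, fresh), Outcome::Updated);
    println!("providers: {:?}", providers.keys().collect::<Vec<_>>());
}
```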

Bonus changes that come along:

  • HashMap → BTreeMap for the providers map. Output YAML now has a
    stable, alphabetical provider order across runs (eliminates
    HashMap-iteration churn in PR diffs). The first dispatched run after
    this lands will produce a one-time reorder PR.
  • Per-provider summary at the end (updated / skipped / failed) so the
    workflow logs and Slack PR body make it obvious what actually changed
    vs. what was left alone.
  • Updated file-level usage comment to match the new behavior and list
    the env vars added in feat(providers): add Vercel AI Gateway and OpenRouter support #902.
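The ordering guarantee behind the HashMap → BTreeMap swap is a standard-library property worth spelling out, since it is what makes the YAML output deterministic:

```rust
use std::collections::BTreeMap;

fn main() {
    // BTreeMap iterates keys in sorted order regardless of insertion
    // order, so serializing the providers map is stable across runs.
    let mut providers = BTreeMap::new();
    for name in ["zhipu", "anthropic", "mistral", "openai"] {
        providers.insert(name, ());
    }
    let order: Vec<&str> = providers.keys().copied().collect();
    assert_eq!(order, ["anthropic", "mistral", "openai", "zhipu"]);
}
```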

Manually verified by running with env -i (no keys at all): all 13
existing providers are preserved with their original model counts; the
only file diff is the one-time alphabetical reorder.

Required secrets (set before first dispatch)

Under Settings → Secrets and variables → Actions. Missing keys are
fine — providers without a key are now preserved from the existing file.

Provider keys (consumed by fetch_models.rs):

  • OPENAI_API_KEY, ANTHROPIC_API_KEY, MISTRAL_API_KEY,
    DEEPSEEK_API_KEY, GROK_API_KEY, MOONSHOT_API_KEY,
    DASHSCOPE_API_KEY, ZHIPU_API_KEY, MIMO_API_KEY, GOOGLE_API_KEY

Forward-compat keys (passed through to the workflow env for #902's
new providers — fetch_models.rs doesn't fetch from them yet, but
adding the env now means no workflow edit when it does):

  • OPENROUTER_API_KEY, AI_GATEWAY_API_KEY

AWS (for Bedrock / Amazon models):

  • AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION

Test plan

After merge:

  • Confirm "Update provider_models.yaml" appears under Actions.
  • Add the secrets above (any subset works — even none works now;
    it'll just produce the one-time reorder PR).
  • Run gh workflow run update-providers.yml --ref main and watch
    with gh run watch.
  • Verify a PR titled chore: refresh provider_models.yaml is
    opened. First run will be the alphabetical reorder + any genuine
    provider updates.
  • Close (or merge) the test PR.
  • Run again with one provider key intentionally removed; verify
    that provider's entry is preserved (not dropped) and the run
    log says ⊘ <provider>: <KEY> not set (keeping existing N models).

Follow-ups (separate PRs)

  • apps/planohelper — the Slack bot Vercel app that sends the
    repository_dispatch.
  • Update fetch_models.rs to actually fetch from OpenRouter and Vercel
    AI Gateway (both have OpenAI-compatible /v1/models endpoints — small
    addition to the existing provider_configs vec).
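The follow-up addition could look roughly like the entries below. The struct shape, field names, and the Vercel AI Gateway endpoint URL are assumptions for illustration; fetch_models.rs's actual `provider_configs` may be shaped differently:

```rust
// Hypothetical shape of the provider_configs entries described above.
#[derive(Debug)]
struct ProviderConfig {
    name: &'static str,
    env_key: &'static str,
    models_url: &'static str,
}

fn forward_compat_configs() -> Vec<ProviderConfig> {
    vec![
        ProviderConfig {
            name: "openrouter",
            env_key: "OPENROUTER_API_KEY",
            models_url: "https://openrouter.ai/api/v1/models",
        },
        ProviderConfig {
            name: "vercel-ai-gateway",
            env_key: "AI_GATEWAY_API_KEY",
            // Assumed endpoint; verify against Vercel AI Gateway docs.
            models_url: "https://ai-gateway.vercel.sh/v1/models",
        },
    ]
}

fn main() {
    for c in forward_compat_configs() {
        println!("{} -> {}", c.env_key, c.models_url);
    }
}
```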

@Spherrrical changed the title from "ci: add update-providers workflow" to "ci+fix: add update-providers workflow + non-destructive fetch_models" on Apr 24, 2026
