fix: fall back on model support errors during auth rotation #2222

Open
kaitranntt wants to merge 2 commits into router-for-me:main from kaitranntt:kai/fix/758-openai-proxy-alternating-model-support

Conversation

@kaitranntt
Contributor

Closes #2221

Summary

  • treat model-support 400 / 422 responses as fallback-eligible instead of terminal request-shape failures
  • suspend the failing auth-model path so round-robin does not immediately reselect it
  • add regression coverage for pooled upstream fallback and cross-auth fallback
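The first bullet's classification can be sketched as a small predicate. This is an illustration only: the real helpers in this PR are isModelSupportResultError and isRequestInvalidError, and the status/message heuristics below are assumptions, not the actual matching logic.

```go
package main

import (
	"fmt"
	"strings"
)

// upstreamError is a stand-in for the proxy's upstream error type (assumed shape).
type upstreamError struct {
	Status  int
	Message string
}

// isFallbackEligibleModelError reports whether a 400/422 looks like a
// "model not supported" response rather than a malformed request. Only
// model-support failures should trigger auth/model fallback; other 4xx
// request-shape errors remain terminal.
func isFallbackEligibleModelError(e upstreamError) bool {
	if e.Status != 400 && e.Status != 422 {
		return false
	}
	msg := strings.ToLower(e.Message)
	if !strings.Contains(msg, "model") {
		return false
	}
	return strings.Contains(msg, "not supported") ||
		strings.Contains(msg, "does not exist") ||
		strings.Contains(msg, "unknown")
}

func main() {
	fmt.Println(isFallbackEligibleModelError(upstreamError{400, "the model `glm-5` is not supported"})) // true
	fmt.Println(isFallbackEligibleModelError(upstreamError{400, "invalid json in request body"}))       // false
	fmt.Println(isFallbackEligibleModelError(upstreamError{503, "model overloaded"}))                   // false
}
```

The point of the split is that a 400 caused by a bad request body should still fail the request outright, while a 400 caused by the chosen upstream not serving the model is worth retrying elsewhere.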

Validation

  • go test ./sdk/cliproxy/auth/...
  • go test ./sdk/cliproxy/...
  • go test ./...

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the authentication rotation mechanism by introducing more intelligent error handling for model support issues. Previously, certain model-related errors would halt the process; now, the system can gracefully fall back to alternative authentication methods or models. This change improves the resilience and reliability of the proxy when dealing with upstream model limitations or unavailability, ensuring a smoother user experience by attempting viable alternatives.

Highlights

  • Model Support Error Handling: Model-support 400 (Bad Request) and 422 (Unprocessable Entity) responses are now treated as fallback-eligible instead of terminal request-shape failures, allowing the system to attempt other authentication methods or models.
  • Auth-Model Path Suspension: The failing authentication-model path is now suspended after a model support error, preventing the round-robin selection from immediately retrying the same problematic path.
  • Regression Coverage: New regression tests have been added to cover scenarios involving pooled upstream fallback and cross-authentication fallback, ensuring the new error handling behaves as expected.


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively addresses the issue of model support errors during authentication rotation. The introduction of isModelSupportResultError and its integration into the MarkResult function correctly identifies and handles these specific errors, allowing for appropriate fallback and suspension of the failing auth-model path. The update to isRequestInvalidError ensures that model support errors are no longer treated as terminal request-shape failures, which is crucial for the intended fallback mechanism. The added regression tests provide good coverage and validate the new behavior for both bad requests and unprocessable entities, confirming that the system now falls back and suspends auths as expected. Overall, the changes are well-implemented and improve the robustness of the authentication rotation logic.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 5135c22cd6

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +1630 to 1634
```diff
 if isModelSupportResultError(result.Error) {
 	next := now.Add(12 * time.Hour)
 	state.NextRetryAfter = next
-	suspendReason = "not_found"
+	suspendReason = "model_not_supported"
 	shouldSuspendModel = true
```


P1 Badge Preserve model-support suspensions across same-auth pool fallback

When a requested model expands to multiple upstream candidates on the same auth, this new branch suspends result.Model, but executeMixedOnce/executeStreamWithModelPool always populate Result.Model with the alias routeModel rather than the failing upstream candidate. If the first candidate returns a model-support 400/422 and a later candidate succeeds, the success path resets that same alias state and calls ResumeClientModel, so the unsupported upstream candidate is eligible again as soon as the pool rotates back. In practice the new suspension never sticks for OpenAI-compat alias pools, which defeats the stated goal of avoiding immediate reselection of the bad path.


Contributor Author


Addressed in ea3e0b7. The fallback state is now keyed to the concrete upstream candidate for pooled aliases, later requests skip suspended upstream members, and a fully blocked pool auth no longer burns retry budget before a healthy fallback auth can run. Added execute/stream/count regressions for repeated pooled requests and the retry-budget edge case.

Comment on lines +1949 to +1950
```go
if isModelSupportError(err) {
	return false
```


P2 Badge Record model-support fallback failures in ExecuteCount

After this change, ExecuteCount will also treat model-support 400/422s as fallback-eligible, but executeCountMixedOnce still only calls m.hook.OnResult instead of MarkResult (around sdk/cliproxy/auth/conductor.go:1087-1107). In setups that run token counting before execution, the current call can fall through to another auth/model, yet the unsupported auth/model is never marked unavailable, so every later count request retries the same failing credential again. The new behavior is therefore sticky for Execute/ExecuteStream, but not for ExecuteCount.


Contributor Author


Addressed in ea3e0b7. ExecuteCount now records through MarkResult, so model-support fallback updates auth/model availability the same way as execute/stream. Added count-path regressions to cover repeated pooled requests after a model-support failure.

@kaitranntt
Contributor Author

Pushed follow-up commit ea3e0b71.

This addresses both Codex review findings and also fixes a related stream edge case: pre-first-byte stream bootstrap failures now continue outer auth rotation when another auth is still available, while preserving the terminal error-stream contract when nothing else can run.

Verification run on this branch:

  • go test ./sdk/cliproxy/auth/...
  • go test ./sdk/api/handlers/...
  • go test ./sdk/cliproxy/...
  • go test ./...


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ea3e0b713e


```go
	continue
}
m.hook.OnResult(execCtx, result)
m.MarkResult(execCtx, result)
```


P1 Badge Don't clear model cooldowns on successful token counts

This now calls MarkResult on every successful ExecuteCount, but several CountTokens paths do not prove the auth/model is usable for real execution—for example OpenAICompatExecutor.CountTokens (internal/runtime/executor/openai_compat_executor.go:307-332) and CodexExecutor.CountTokens (internal/runtime/executor/codex_executor.go:402-435) are purely local tokenization. If a model was suspended after a real 429 or model-support failure, a later /count_tokens success will clear NextRetryAfter and ResumeClientModel anyway. In setups that count tokens before sending requests, that immediately undoes the new suspension/cooldown logic and routes traffic back to throttled or unsupported credentials.


Comment on lines +440 to +443
```go
func executionResultModel(routeModel, upstreamModel string, pooled bool) string {
	if pooled {
		if resolved := strings.TrimSpace(upstreamModel); resolved != "" {
			return resolved
```


P2 Badge Preserve alias cooldowns for pooled request retries

In pooled alias routes, executionResultModel now records retriable failures under the concrete upstream names (qwen3.5-plus, glm-5, etc.), but shouldRetryAfterError still asks closestCooldownWait about the requested alias (req.Model). Since isAuthBlockedForModel only checks the model key it is given, once every candidate in the pool is cooling down from a 429/5xx the manager finds no blocked auth for the alias and skips the configured request_retry wait entirely. This regresses retry behavior for OpenAI-compat alias pools compared with the previous alias-scoped state.



Development

Successfully merging this pull request may close these issues.

bug: model-support 400/422 can cause alternating fallback failures

1 participant