
fix(codex): refresh provider on thread resume #302

Merged: cyhhao merged 5 commits into master from fix/codex-resume-provider-override on May 15, 2026

Conversation

@cyhhao (Owner) commented May 15, 2026

What

  • Sends the currently effective Codex modelProvider when resuming a persisted thread.
  • Resolves project-scoped provider settings before falling back to the top-level provider or built-in openai.
  • Keeps old threads working after users switch between OAuth and API-key relay modes.

Evidence

  • Unit: added Codex config/provider resolution coverage.
  • Contract: added Codex thread/resume payload coverage.
  • Scenario: manually reproduced resume payload override for an old thread in the local app-server path.
  • Residual manual checks: no IM platform regression run; change is scoped to Codex app-server resume params.

Tests

  • uv run pytest tests/test_codex_agent.py tests/test_codex_config.py tests/test_settings_disk_fallback.py
  • uv run ruff check modules/agents/codex/agent.py vibe/codex_config.py tests/test_codex_agent.py tests/test_codex_config.py

@cyhhao (Owner, Author) commented May 15, 2026

@codex Review

@chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d5cee00423

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review"

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread modules/agents/codex/agent.py Outdated
Comment on lines +567 to +569
model_provider = read_active_model_provider(cwd=getattr(request, "working_path", None))
if model_provider:
resume_params["modelProvider"] = model_provider

P1 Badge Stop forcing modelProvider on every thread/resume

thread/resume is now always sent with modelProvider, but Codex app-server docs state that supplying modelProvider disables resume’s persisted model/reasoningEffort fallback (openai/codex codex-rs/app-server/README.md, around line 482). That means resuming a thread created with a non-default model will silently re-resolve model settings from current config instead of preserving the thread’s last persisted model, so users can unexpectedly switch models after resume (especially when turn/start omits an explicit model). This regression is introduced by adding resume_params["modelProvider"] unconditionally here.

Useful? React with 👍 / 👎.

@cyhhao (Owner, Author) replied

Addressed in 00de2f8. Resume now reads the stored thread metadata first and only sends modelProvider when the persisted thread provider differs from the currently resolved Codex provider. Matching-provider resumes omit the override, so Codex keeps its persisted model / reasoning-effort fallback.

@cyhhao (Owner, Author) commented May 15, 2026

@codex Review

@chatgpt-codex-connector (Bot) left a comment

💡 Codex Review


Reviewed commit: 00de2f88cc


Comment thread vibe/codex_config.py Outdated
Comment on lines +94 to +96
config_path, _ = get_codex_config_paths(home)
toml_data = _load_toml(config_path)
if cwd is not None:

P1 Badge Respect profile/project config in provider resolution

read_active_model_provider() only reads $CODEX_HOME/config.toml and then derives model_provider from that single document, but Codex merges config in a higher-precedence order (global config, then project .codex/config.toml, then active profile overrides). Because _start_or_resume_thread() now uses this helper to decide whether to send modelProvider on thread/resume, a user with provider overrides in project config or [profiles.<name>] can get an incorrect override injected, which can route resumes to the wrong provider and break continuation for provider-bound threads.

@cyhhao (Owner, Author) replied

Addressed in d0b05e9. The resume provider check no longer parses ~/.codex/config.toml in Vibe Remote. It asks the running Codex app-server via config/read with the request cwd, then compares that effective modelProvider with the stored thread metadata. That leaves Codex responsible for profile/project/config-layer precedence and avoids injecting a stale provider override.

@cyhhao (Owner, Author) commented May 15, 2026

@codex Review

@chatgpt-codex-connector (Bot) left a comment

💡 Codex Review


Reviewed commit: d0b05e9281


Comment thread modules/agents/codex/agent.py Outdated
Comment on lines +652 to +653
model_provider = config_obj.get("modelProvider")
if isinstance(model_provider, str) and model_provider.strip():

P1 Badge Read model_provider from config/read payload

_read_effective_model_provider looks up config_obj.get("modelProvider"), but app-server config/read returns config fields in snake_case (Config is serialized with rename_all = "snake_case" and includes model_provider). In this path current_provider is always None, so _resolve_resume_model_provider_override never injects modelProvider into thread/resume, which means provider refresh on resumed threads silently does not happen when users switch auth/provider modes.

@cyhhao (Owner, Author) replied

Addressed in 1e7ea72. config/read config fields are snake_case, so _read_effective_model_provider now reads model_provider. The tests were updated to use snake_case for config/read while keeping camelCase modelProvider for thread/read thread metadata.

@cyhhao (Owner, Author) commented May 15, 2026

@codex Review

@chatgpt-codex-connector (Bot) left a comment

💡 Codex Review


Reviewed commit: 1e7ea72ba5


Comment thread modules/agents/codex/agent.py Outdated
Comment on lines +628 to +630
if stored_provider.strip() == current_provider:
return None
return current_provider

P1 Badge Keep persisted provider when resuming cross-provider threads

Only overriding modelProvider based on current config when it differs from the stored thread provider can break valid cross-provider resumes: this path now forces thread/resume onto the current provider for any mismatch, even when the persisted thread was created under a different provider and must continue there. In those cases, resume can fail or misroute history because the provider affinity encoded in the stored thread is discarded at resume time.

@cyhhao (Owner, Author) replied

Addressed in db97d6c. The provider override is now limited to transitions between Vibe Remote-managed Codex provider IDs (openai and openai-managed). Other mismatches are treated as intentional cross-provider sessions and resume without modelProvider, preserving Codex persisted provider affinity. Added a regression test for an unmanaged anthropic thread while the current config points at openai-managed.

@cyhhao (Owner, Author) commented May 15, 2026

@codex Review

@chatgpt-codex-connector (Bot) commented

Codex Review: Didn't find any major issues. Hooray!


cyhhao merged commit fc641e8 into master on May 15, 2026
4 checks passed