fix(codex): refresh provider on thread resume #302
Commits: d5cee00, 00de2f8, d0b05e9, 1e7ea72, db97d6c
```diff
@@ -40,6 +40,7 @@
 # ``LEGACY_MANAGED_PROVIDER_IDS`` and the migration in ``apply_codex_auth``.
 MANAGED_PROVIDER_ID = "openai-managed"
 LEGACY_MANAGED_PROVIDER_IDS = ("openai",)
+DEFAULT_PROVIDER_ID = "openai"

 # Codex's top-level ``cli_auth_credentials_store`` controls where the CLI
 # reads/writes cached credentials: ``file`` → ``~/.codex/auth.json``,
@@ -75,6 +76,62 @@ def get_codex_config_paths(home: Path | None = None) -> tuple[Path, Path]:
     return codex_home / "config.toml", codex_home / "auth.json"


+def read_active_model_provider(
+    home: Path | None = None,
+    cwd: str | Path | None = None,
+) -> str:
+    """Return the provider id Codex will use for a thread in ``cwd``.
+
+    Codex persists a thread's provider in session metadata and reuses it on
+    resume unless the caller passes an explicit ``modelProvider`` override.
+    Vibe Remote uses this helper to migrate resumed threads after the user
+    switches Codex auth/provider settings, for example OAuth -> API key relay.
+    Project-scoped settings take precedence over the top-level provider, which
+    matches Codex's normal ``thread/start`` resolution for a request ``cwd``.
+    If no top-level provider is configured, Codex falls back to its built-in
+    OpenAI provider, whose id is ``openai``.
+    """
+    config_path, _ = get_codex_config_paths(home)
+    toml_data = _load_toml(config_path)
+    if cwd is not None:
```
|
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more.
Useful? React with 👍 / 👎.
Owner
Author
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Addressed in d0b05e9. The resume provider check no longer parses |
```diff
+        project_provider = _project_model_provider(toml_data, Path(cwd).expanduser())
+        if project_provider:
+            return project_provider
+    model_provider = toml_data.get("model_provider")
+    if isinstance(model_provider, str) and model_provider.strip():
+        return model_provider.strip()
+    return DEFAULT_PROVIDER_ID
+
+
+def _project_model_provider(toml_data: Dict[str, Any], cwd: Path) -> str | None:
+    """Return the nearest project-scoped provider for ``cwd`` if configured."""
+    projects = toml_data.get("projects")
+    if not isinstance(projects, dict):
+        return None
+
+    try:
+        resolved_cwd = cwd.resolve(strict=False)
+    except OSError:
+        resolved_cwd = cwd.absolute()
+
+    best_match: tuple[int, str] | None = None
+    for raw_path, settings in projects.items():
+        if not isinstance(raw_path, str) or not isinstance(settings, dict):
+            continue
+        provider = settings.get("model_provider")
+        if not isinstance(provider, str) or not provider.strip():
+            continue
+        try:
+            project_path = Path(raw_path).expanduser().resolve(strict=False)
+        except OSError:
+            project_path = Path(raw_path).expanduser().absolute()
+        if resolved_cwd == project_path or project_path in resolved_cwd.parents:
+            score = len(project_path.parts)
+            if best_match is None or score > best_match[0]:
+                best_match = (score, provider.strip())
+    return best_match[1] if best_match else None
+
+
 def _load_toml(path: Path) -> Dict[str, Any]:
     if not path.exists():
         return {}
```
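The resolution order in the hunk above (nearest project-scoped `model_provider`, then the top-level key, then the built-in `openai` default) can be exercised in isolation. The sketch below is a simplified re-implementation of that logic against an in-memory TOML dict rather than a real `~/.codex/config.toml`; the provider ids and paths in the sample config are illustrative, not taken from the PR:

```python
from pathlib import Path
from typing import Any, Dict

DEFAULT_PROVIDER_ID = "openai"


def resolve_provider(toml_data: Dict[str, Any], cwd: Path) -> str:
    """Simplified re-implementation of the diff's resolution order."""
    projects = toml_data.get("projects")
    if isinstance(projects, dict):
        resolved_cwd = cwd.resolve(strict=False)
        best: tuple[int, str] | None = None
        for raw_path, settings in projects.items():
            if not isinstance(raw_path, str) or not isinstance(settings, dict):
                continue
            provider = settings.get("model_provider")
            if not isinstance(provider, str) or not provider.strip():
                continue
            project_path = Path(raw_path).expanduser().resolve(strict=False)
            # Match cwd itself or any ancestor; deeper project paths win.
            if resolved_cwd == project_path or project_path in resolved_cwd.parents:
                score = len(project_path.parts)
                if best is None or score > best[0]:
                    best = (score, provider.strip())
        if best:
            return best[1]
    # Fall back to the top-level provider, then the built-in default.
    top_level = toml_data.get("model_provider")
    if isinstance(top_level, str) and top_level.strip():
        return top_level.strip()
    return DEFAULT_PROVIDER_ID


# Illustrative config: a top-level provider plus two nested project scopes.
config = {
    "model_provider": "azure",
    "projects": {
        "/work/repo": {"model_provider": "openai-managed"},
        "/work": {"model_provider": "relay"},
    },
}
```

With this config, a `cwd` under `/work/repo` resolves to `openai-managed` (the deepest matching project scope), a sibling under `/work` resolves to `relay`, and any path outside both falls back to the top-level `azure`.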
Reviewer:

> `thread/resume` is now always sent with `modelProvider`, but the Codex app-server docs state that supplying `modelProvider` disables resume's persisted `model`/`reasoningEffort` fallback (openai/codex, `codex-rs/app-server/README.md`, around line 482). That means resuming a thread created with a non-default model will silently re-resolve model settings from the current config instead of preserving the thread's last persisted model, so users can unexpectedly switch models after resume (especially when `turn/start` omits an explicit `model`). This regression is introduced by adding `resume_params["modelProvider"]` unconditionally here.

Author:

> Addressed in 00de2f8. Resume now reads the stored thread metadata first and only sends `modelProvider` when the persisted thread provider differs from the currently resolved Codex provider. Matching-provider resumes omit the override, so Codex keeps its persisted model / reasoning-effort fallback.
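The guard described in that reply can be sketched as a small helper that decides whether to attach the override to the resume request. `build_resume_params` and the flat argument shape are stand-ins invented for this sketch, not the PR's actual function names:

```python
from typing import Any, Dict, Optional


def build_resume_params(
    thread_id: str,
    persisted_provider: Optional[str],
    resolved_provider: str,
) -> Dict[str, Any]:
    """Attach ``modelProvider`` only when the thread's persisted provider
    differs from the currently resolved one.

    Omitting the key on matching-provider resumes lets Codex keep its
    persisted model / reasoning-effort fallback. When no persisted provider
    is found in the thread metadata we also omit the override (an assumption
    of this sketch; the PR may handle missing metadata differently).
    """
    params: Dict[str, Any] = {"threadId": thread_id}
    if persisted_provider is not None and persisted_provider != resolved_provider:
        params["modelProvider"] = resolved_provider
    return params
```

So a thread persisted with `openai` resumed while the resolved provider is still `openai` sends no override, while the same thread resumed after switching to `openai-managed` does.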