
Conversation


@Patrick-Erichsen Patrick-Erichsen commented Oct 20, 2025

Blocked by #7891
Resolves CON-4119

Note that this model will be included automatically in the handful of places we do model-specific checks for tools, etc., because those checks look for either gpt or gpt-5, and this model is named gpt-5-codex.


Summary by cubic

Added GPT-5 Codex support so it can be selected under OpenAI. Enables chat and edit workflows with up to 500k context and 150k max completion tokens.

  • New Features
    • Added GPT-5 Codex to the Add New Model UI and OpenAI provider list.
    • Registered model in llm-info with id gpt-5-codex, display name, 500k context, 150k max tokens, regex, and recommendedFor ["chat", "edit"].
    • Automatically covered by existing gpt/gpt-5 tool-use checks.

- Add GPT-5 Codex to llm-info package with 500k context and 150k max tokens
- Add model definition to models.ts for UI configuration
- Include GPT-5 Codex in OpenAI provider packages list
- Model supports chat and edit roles with tool_use capability
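
As a sketch of what the registered entry might look like, based only on the fields named in the PR summary (the exact llm-info schema and field names are assumptions here, not confirmed from the codebase):

```typescript
// Hypothetical sketch of the llm-info entry described above.
// Field names (contextLength, maxCompletionTokens, recommendedFor, regex)
// are assumed from the PR summary; the real schema may differ.
const gpt5Codex = {
  model: "gpt-5-codex",
  displayName: "GPT-5 Codex",
  contextLength: 500_000,       // 500k context window
  maxCompletionTokens: 150_000, // 150k max completion tokens
  regex: /gpt-5-codex/i,
  recommendedFor: ["chat", "edit"],
};
```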
@Patrick-Erichsen Patrick-Erichsen marked this pull request as ready for review October 20, 2025 17:39
@Patrick-Erichsen Patrick-Erichsen requested a review from a team as a code owner October 20, 2025 17:39
@Patrick-Erichsen Patrick-Erichsen requested review from sestinj and removed request for a team October 20, 2025 17:39
@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label Oct 20, 2025
@github-actions github-actions bot commented Oct 20, 2025

✅ Review Complete

Code Review for PR #8350: feat: add GPT-5 Codex model support

Overall Assessment

The changes look solid and follow existing patterns correctly. The implementation is straightforward and properly integrates the new model into all necessary locations.

✅ Positives

  • Consistent with existing model definitions across all three files
  • Proper placement in the OpenAI provider packages list (maintained alphabetical ordering with other GPT-5 models)
  • Correct metadata: context length (500k), max completion tokens (150k), and recommended use cases

🔍 Issues Found

1. Missing model in core provider list (Medium Priority)

  • Location: packages/llm-info/src/providers/openai.ts
  • Issue: The new model is added to the modelInfo array, but if the same file also maintains a main models array, the entry should be checked against it as well. Verify the model is properly exported and available for use.
  • Suggestion: Ensure the model is included in any exports or model lists that downstream code relies on.

2. Tool support verification needed (Low Priority)

  • Location: General concern
  • Issue: The PR description mentions "automatically covered by existing gpt/gpt-5 tool-use checks" but we should verify:
    • Does gpt-5-codex match the /gpt-5/ regex in openai.ts line 84? Yes: the unanchored regex /gpt-5/ will match "gpt-5-codex", though the more specific model definition should take precedence.
    • Are there any other places checking for model names with more specific patterns like startsWith('gpt-5') that might not include the codex variant?
  • Suggestion: Search for tool support checks to confirm they work with the hyphenated model name:
    grep -r "gpt-5" --include="*.ts" core/
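
The matching question raised above can be checked in isolation. This is a minimal sketch of the two pattern styles the review mentions; the actual checks in openai.ts or core/ may use different patterns:

```typescript
// Minimal sketch: both pattern styles the review asks about do cover
// the codex variant. The real checks in the codebase may differ.
const modelName = "gpt-5-codex";

// An unanchored /gpt-5/ regex matches any name containing "gpt-5".
const gpt5Pattern = /gpt-5/;
console.log(gpt5Pattern.test(modelName)); // true

// A startsWith check also covers it, since the name begins with "gpt-5".
console.log(modelName.startsWith("gpt-5")); // true
```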

3. Consider adding to autocomplete models (Question)

  • Location: May be elsewhere in the codebase
  • Issue: Since this is a code-focused model, should it also be marked as suitable for autocomplete?
  • Suggestion: Check if autocomplete models are configured elsewhere and whether GPT-5 Codex should be included.

Minor Observations

  • The underscores in contextLength: 500_000 are good for readability (line 1088)
  • Icon and provider settings are correctly configured

Recommended Actions Before Merge

  1. ✅ Verify the regex patterns in tool support code actually match "gpt-5-codex"
  2. ✅ Confirm the model is properly exported from llm-info package
  3. ⚠️ Consider if autocomplete support should be added

The code quality is good and follows established patterns. Main concern is ensuring downstream tool-use detection works as claimed in the PR description.


@cubic-dev-ai cubic-dev-ai bot left a comment

No issues found across 3 files
