Conversation

@vitali87
Owner

Summary

  • Add /model <provider:model> command to switch LLM models during interactive sessions
  • Add /model command (no args) to show current model
  • Add /help command to display available commands
  • Properly inherit API keys when switching between models of the same provider
  • Support switching to ollama with default local endpoint

Test plan

  • Unit tests added for _handle_model_command function
  • Tests for model override passed to agent.run()
  • Tests for multiple model switches in sequence
  • All 16 tests passing

Closes #59

@gemini-code-assist
Contributor

Summary of Changes

Hello @vitali87, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the interactive session experience by introducing dynamic LLM model switching capabilities. Users can now seamlessly change the underlying large language model used by the agent during a session, providing greater flexibility for experimentation and task-specific model selection. The changes also include a new help command for improved user guidance and robust error handling for model configuration.

Highlights

  • Dynamic Model Switching: Introduced a /model <provider:model> command, allowing users to switch Large Language Models (LLMs) during interactive sessions for greater flexibility.
  • Current Model Display: Added a /model command (without arguments) to display the currently active LLM, providing immediate feedback to the user.
  • Help Command: Implemented a /help command to list all available interactive commands, improving user guidance.
  • API Key Inheritance: Ensured that API keys are properly inherited when switching between models from the same provider, streamlining the model configuration process.
  • Ollama Support: Added support for switching to Ollama models, defaulting to a local endpoint if no specific endpoint is provided.
  • Comprehensive Testing: Included new unit tests to cover the model switching logic, including command parsing, error handling, and integration with the agent's execution.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@greptile-apps

greptile-apps bot commented Jan 10, 2026

Greptile Overview

Greptile Summary

This PR successfully implements dynamic model switching during interactive sessions via the /model <provider:model> command, along with /model (no args) to display the current model and /help to show available commands.

Implementation Quality

The implementation is exceptionally well-designed and follows all project coding standards:

Core Functionality:

  • /model <provider:model> switches models at runtime (e.g., /model openai:gpt-4o)
  • /model displays the currently active model (either overridden or default)
  • /help shows all available commands
  • Model state is maintained as local variables (model_override, model_override_string, model_override_config) in _run_interactive_loop and passed to agent.run() on each query
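
The override-threading described in the last bullet might look roughly like this. The names `model_override` and `agent.run(model=...)` come from the review; the loop body and `run_loop` are illustrative assumptions:

```python
# Sketch of how a local model_override could be threaded through an
# interactive loop and passed to agent.run() on every query.

def run_loop(agent, inputs: list[str]) -> list:
    model_override = None
    responses = []
    for line in inputs:
        stripped = line.strip()
        if stripped.lower().startswith("/model "):
            # Record the new model; subsequent queries use it
            model_override = stripped.split(maxsplit=1)[1]
            continue
        # None falls back to the agent's default model
        responses.append(agent.run(stripped, model=model_override))
    return responses
```

Keeping the override as a plain local variable, re-passed on each call, avoids mutating the agent itself.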

Configuration Inheritance Strategy:

  • Same-provider switches: Uses dataclasses.replace() to inherit all existing config including API keys (e.g., google:gemini-1.5 → google:gemini-2.0)
  • Ollama switches: Creates new config with DEFAULT_API_KEY ("ollama") and LOCAL_MODEL_ENDPOINT
  • Other providers: Creates minimal config; API keys fall back to environment variables (ENV_OPENAI_API_KEY, ENV_GOOGLE_API_KEY)

Error Handling:

  • Validates model format (requires provider:model format with colon)
  • Validates non-empty provider and model_id
  • Provider validation catches missing API keys and displays clear error messages
  • All exceptions are caught and logged; user sees friendly error messages
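
The format validation in the first two bullets could be sketched as below; the error messages are illustrative, not the PR's actual exception strings:

```python
# Sketch of the provider:model validation described above.

def parse_model_string(model_string: str) -> tuple[str, str]:
    if ":" not in model_string:
        raise ValueError("Model must be given in provider:model format")
    provider, _, model_id = model_string.partition(":")
    provider, model_id = provider.strip(), model_id.strip()
    if not provider or not model_id:
        raise ValueError("Both provider and model id must be non-empty")
    return provider, model_id
```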

Code Organization:

  • All strings properly externalized to constants.py, logs.py, exceptions.py
  • Type hints on all functions
  • Follows DRY principles with helper functions _create_model_from_string and _handle_model_command
  • Case-insensitive command matching (uses .lower() for comparison)
  • Whitespace handling with .strip()

Testing

Comprehensive test suite with 16 passing tests covering:

  • Command parsing (with/without args, whitespace, case sensitivity)
  • Model switching (same provider, different provider, multiple switches)
  • Error cases (invalid format, missing provider/model_id, validation failures)
  • Integration with agent loop (model_override parameter passing)
  • Help command functionality
  • Edge cases and boundary conditions

Integration

  • Works in both chat mode and optimization mode (both use _run_interactive_loop)
  • Model override properly passed through to agent.run(model=model_override)
  • No breaking changes to existing functionality
  • Clean separation of concerns between parsing, validation, and execution

Adherence to Standards

✅ PydanticAI-only framework
✅ Heavy Pydantic usage (ModelConfig dataclass)
✅ Type hints everywhere
✅ No hardcoded strings (all in constants/logs/exceptions)
✅ StrEnum for constrained values (Provider enum)
✅ Proper error messages with context
✅ No comments (code is self-documenting)
✅ Match statements for branching logic
✅ Loguru for logging

No Issues Found

After thorough analysis of all changed files, edge cases, error handling, and integration points, no bugs or logic errors were identified. The implementation is production-ready.

Confidence Score: 5/5

  • This PR is safe to merge with high confidence - no critical issues found
  • Score of 5 reflects exceptional implementation quality: comprehensive test coverage (16 tests, all passing), robust error handling at every layer, proper configuration inheritance, clean integration with existing code, full adherence to project coding standards, and thorough validation of all edge cases. The code is production-ready.
  • No files require special attention - all implementations are solid

Important Files Changed

File Analysis

| Filename | Score | Overview |
| --- | --- | --- |
| codebase_rag/main.py | 5/5 | Added /model command for runtime model switching and /help command. New functions _create_model_from_string and _handle_model_command handle model switching logic with proper error handling and config inheritance. Integration into the interactive loop is clean. |
| codebase_rag/providers/base.py | 5/5 | Added fallback to environment variables for API keys in GoogleProvider and OpenAIProvider. This enables seamless model switching without requiring API keys to be passed explicitly when they're already set in the environment. |
| codebase_rag/tests/test_model_switching.py | 5/5 | Comprehensive test suite with 16 tests covering model switching, command handling, error cases, edge cases, and integration with the agent loop. Tests are well-structured and cover all major code paths. |

Sequence Diagram

sequenceDiagram
    participant User
    participant InteractiveLoop as _run_interactive_loop
    participant ModelCmd as _handle_model_command
    participant CreateModel as _create_model_from_string
    participant Provider as get_provider_from_config
    participant Agent as _run_agent_response_loop

    User->>InteractiveLoop: Input "/model openai:gpt-4o"
    InteractiveLoop->>InteractiveLoop: Parse command (lowercase check)
    InteractiveLoop->>ModelCmd: Call with stripped_question
    
    ModelCmd->>ModelCmd: Extract argument from command
    ModelCmd->>CreateModel: Create model from "openai:gpt-4o"
    
    CreateModel->>CreateModel: Parse provider:model string
    CreateModel->>CreateModel: Check if same provider as current
    
    alt Same Provider
        CreateModel->>CreateModel: Use dataclasses.replace() to inherit config
    else Ollama Provider
        CreateModel->>CreateModel: Create new config with defaults
    else Other Provider
        CreateModel->>CreateModel: Create minimal config (api_key from env)
    end
    
    CreateModel->>Provider: get_provider_from_config(config)
    Provider->>Provider: Instantiate provider (checks env vars)
    Provider-->>CreateModel: Return provider instance
    
    CreateModel->>Provider: provider.create_model(model_id)
    Provider->>Provider: validate_config() - check API keys
    
    alt Validation Success
        Provider-->>CreateModel: Return Model instance
        CreateModel-->>ModelCmd: (model, canonical_string, config)
        ModelCmd->>ModelCmd: Log success
        ModelCmd-->>InteractiveLoop: Return new model state
        InteractiveLoop->>User: Display "Model switched to: openai:gpt-4o"
    else Validation Failure
        Provider-->>CreateModel: Raise ValueError
        CreateModel-->>ModelCmd: Propagate exception
        ModelCmd->>ModelCmd: Catch exception, log error
        ModelCmd-->>InteractiveLoop: Return unchanged state
        InteractiveLoop->>User: Display error message
    end
    
    User->>InteractiveLoop: Input user query
    InteractiveLoop->>Agent: Call with model_override
    Agent->>Agent: agent.run(query, model=model_override)
    Agent-->>InteractiveLoop: Return response
    InteractiveLoop->>User: Display response from new model

@greptile-apps greptile-apps bot left a comment

2 files reviewed, 2 comments


Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching language models during an interactive session, along with a new /help command. The implementation is solid and includes a good set of unit tests. I've identified a few opportunities for refactoring in main.py to enhance code clarity and maintainability. Additionally, I've found a significant issue in the new tests within test_model_switching.py where they don't fully validate the intended behavior, and I've provided suggestions for a fix.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching LLM models during an interactive session using the /model command, and adds a /help command. The implementation is solid and includes comprehensive unit tests. The provided suggestions aim to improve user experience by ensuring the model identifier displayed is always in the full provider:model_id format, which avoids ambiguity when providers are inferred. These changes will make the new feature more robust and user-friendly.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching language models during interactive sessions using the /model command, which significantly enhances the tool's flexibility. The implementation is robust, including new functions for command handling and model instantiation, and is well-supported by a comprehensive new suite of unit tests. My feedback includes a couple of refactoring suggestions to improve code clarity and maintainability.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching LLM models during an interactive session using the /model command, along with a /help command. The implementation is well-structured, with new logic in main.py and comprehensive unit tests. My review focuses on improving the robustness of this new feature. I've identified a potential crash due to narrow exception handling and a design issue that prevents switching to new providers that require API keys. Addressing these points will make the model switching functionality more reliable and user-friendly.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching LLM models using the /model command, along with a /help command for user guidance. The implementation is clean, leveraging constants and adding appropriate logging. The new unit tests are comprehensive and cover various scenarios for the new model switching logic. I have one suggestion to improve input validation for model_id to ensure fail-safe behavior, which is crucial for security-sensitive LLM agent features.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching LLM models during interactive sessions using the /model command. The implementation is well-structured, including a /help command and proper handling of API keys for different providers. The addition of comprehensive unit tests is also a great practice. I have a couple of suggestions to enhance the user experience and ensure feature completeness.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching LLM models during interactive sessions via the /model command. It also adds a /help command for better user guidance. The implementation is solid, with good separation of concerns, proper handling of API keys, and comprehensive unit tests.

My review includes a few suggestions to improve code clarity and maintainability by renaming a constant to better reflect its purpose. Overall, this is a great contribution.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces dynamic model switching via a /model command, a /help command for user guidance, and improves API key handling. The changes are well-implemented and include a comprehensive set of unit tests for the new functionality. My review includes a few suggestions to improve code clarity, maintainability, and user experience by refining error handling and help messages.

@vitali87
Owner Author

/gemini review

@greptile-apps greptile-apps bot left a comment

5 files reviewed, 5 comments


@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching LLM models during interactive sessions using the /model command. It also adds a /help command for better user guidance. The implementation is robust, including inheriting API keys when switching models and adding comprehensive unit tests for the new functionality. My main feedback, which remains valid, is a suggestion to refactor the command handling logic in the interactive loop for improved readability and maintainability. Overall, this is a great addition to the project.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching LLM models during an interactive session using the /model command. It also adds a /help command for better user experience. The implementation is well-structured, with clear separation of concerns for command handling, model creation, and agent interaction. The new unit tests are comprehensive and cover the new functionality well. I have one suggestion to improve the robustness of the error handling by catching a more specific exception, which will make the code more maintainable and easier to debug.

@vitali87
Owner Author

/gemini review

@vitali87
Owner Author

@greptile

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces dynamic model switching via the /model command, a /help command for user guidance, and ensures API keys are properly inherited when switching models. The implementation is well-structured, with new constants, log messages, and exception strings to support the new functionality. The core logic is handled in _create_model_from_string and _handle_model_command, which are robust and include error handling. The interactive loop in main.py is updated to process these new commands. Additionally, the provider classes are improved to fall back to environment variables for API keys, which is a good practice. I've identified a minor issue in the command parsing logic and provided a suggestion for improvement. Overall, this is a great feature addition.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching LLM models during an interactive session using the /model command. The implementation is well-structured, with clear separation of concerns into command handling, model creation, and agent interaction. The addition of the /help command and the thoughtful handling of API key inheritance and environment variable fallbacks significantly improve usability. The new unit tests are comprehensive and cover various scenarios, including edge cases, which is excellent. I have one suggestion for improvement to make the model string parsing more robust.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a great new feature for dynamically switching LLM models during an interactive session using the /model command. It also adds a /help command to improve user experience. The implementation is well-structured, with new constants, log messages, and exception messages properly defined. The addition of unit tests for the new functionality is also a big plus.

I've found one issue in main.py that prevents using a shorthand for ollama models, and a small redundancy that could be cleaned up. Overall, this is a solid contribution.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching LLM models and a /help command. The implementation is well-structured and includes comprehensive unit tests. I have two main suggestions for improvement in codebase_rag/main.py: a high-severity issue regarding model configuration inheritance when switching between models of the same provider, and a medium-severity suggestion to improve user experience by allowing shorthand model names for the ollama provider. Addressing these points will make the model switching feature more powerful and intuitive.

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a dynamic model switching feature via the /model command, along with a /help command. The implementation is solid, with good separation of concerns and comprehensive unit tests covering various scenarios and edge cases. The changes to provider classes to fall back to environment variables for API keys are a good improvement for configuration flexibility. I have one suggestion for refactoring in main.py to improve long-term maintainability as more provider-specific logic is added.

@vitali87
Owner Author

@greptile

@vitali87
Owner Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable feature for dynamically switching LLM models during an interactive session using the /model command. It also adds a /help command for better user guidance. The implementation is well-structured, with clear separation of concerns for command handling, model creation, and state management. The changes to automatically pick up API keys from environment variables are a great improvement for usability. The addition of comprehensive unit tests for the new functionality is commendable and ensures the feature is robust. I have one suggestion to improve code readability.

@vitali87 vitali87 merged commit 8c85592 into main Jan 10, 2026
1 check passed

Development

Successfully merging this pull request may close these issues.

Dynamically switch models during session

2 participants