fix: plumb thinking blocks between litellm and gen ai sdk parts #3334
Conversation
Summary of Changes: Hello @aneeshgarg, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses an integration gap by plumbing 'thinking blocks' from the Claude model, accessed via LiteLLM, into the GenAI SDK. This ensures that the model's internal thought processes, including redacted ones, are properly captured and exposed to users, enhancing transparency and providing richer context for AI interactions.
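To make the change concrete, here is a minimal sketch of the kind of mapping the PR describes: converting a LiteLLM/Anthropic-style thinking block into a thought part for the GenAI SDK. The `Part` dataclass below is a simplified stand-in for the SDK's part type, and the block shapes are assumptions based on Anthropic's 'thinking' and 'redacted_thinking' block types, not the PR's actual code.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Part:
    """Simplified stand-in for the GenAI SDK part type (illustrative only)."""
    text: Optional[str] = None
    thought: bool = False
    thought_signature: Optional[bytes] = None


def thinking_block_to_part(block: dict) -> Optional[Part]:
    """Map a LiteLLM/Anthropic-style thinking block to a thought Part.

    Assumed block shapes:
      {"type": "thinking", "thinking": "...", "signature": "..."}
      {"type": "redacted_thinking", "data": "..."}
    """
    if block.get("type") == "thinking":
        sig = block.get("signature")
        return Part(
            text=block.get("thinking"),
            thought=True,
            thought_signature=sig.encode() if sig else None,
        )
    if block.get("type") == "redacted_thinking":
        # Redacted thoughts carry only opaque data; preserve it so it can
        # be echoed back to the model on the next request.
        return Part(text=block.get("data"), thought=True)
    # Not a thinking block; handled elsewhere (text, tool calls, etc.).
    return None
```

The key design point is that both block types become parts marked `thought=True`, so downstream code can surface or filter them uniformly.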
Response from ADK Triaging Agent: Hello @aneeshgarg, thank you for your contribution! Could you please provide the console output from your manual E2E test, as described in the contribution guidelines? This will help reviewers better understand and verify the fix. Thanks!
Code Review
This pull request successfully implements the plumbing for 'thinking blocks' between LiteLLM and the GenAI SDK, which is a great addition for supporting models like Claude. The implementation correctly handles both 'thinking' and 'redacted_thinking' block types. My review includes a couple of suggestions to improve code maintainability: replacing a magic string with a constant, and refactoring a section to fix a minor logging bug and reduce code duplication.
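The "magic string" suggestion from the review can be illustrated with a small sketch. The constant names and the helper below are hypothetical, not taken from the PR; the point is simply that the block-type strings live in one place instead of being repeated inline at each comparison site.

```python
# Module-level constants replace inline "thinking" / "redacted_thinking"
# string literals scattered across the conversion code.
THINKING_TYPE = "thinking"
REDACTED_THINKING_TYPE = "redacted_thinking"


def is_thinking_block(block: dict) -> bool:
    """Return True if the block is either form of thinking block."""
    return block.get("type") in (THINKING_TYPE, REDACTED_THINKING_TYPE)
```

With the constants in place, a typo in a block-type string becomes a NameError at import time rather than a silently failing comparison at runtime.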
Link to Issue or Description of Change
1. Link to an existing issue (if applicable):
2. Or, if no issue exists, describe the change:
Problem:
The Claude model, accessed via LiteLLM, returns thinking blocks and requires those blocks to be passed back in subsequent requests when thinking is enabled. These blocks were not being plumbed through and were not shown to users.
Solution:
Plumb the thinking blocks between LiteLLM and GenAI SDK parts.
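The other half of the plumbing is the return trip: rebuilding thinking blocks from stored thought parts so they can accompany the next request, as the problem statement says Claude requires when thinking is enabled. This is a minimal sketch using plain dicts for parts; the field names (`thought`, `thought_signature`) and block shapes are assumptions for illustration, not the PR's actual code.

```python
def parts_to_thinking_blocks(parts: list[dict]) -> list[dict]:
    """Rebuild Anthropic-style thinking blocks from stored thought parts.

    Assumed part shape: {"text": str, "thought": bool,
                         "thought_signature": str (optional)}.
    Parts with a signature become 'thinking' blocks; thought parts
    without one are treated as redacted. Non-thought parts are skipped.
    """
    blocks = []
    for part in parts:
        if not part.get("thought"):
            continue
        if part.get("thought_signature"):
            blocks.append({
                "type": "thinking",
                "thinking": part.get("text", ""),
                "signature": part["thought_signature"],
            })
        else:
            blocks.append({
                "type": "redacted_thinking",
                "data": part.get("text", ""),
            })
    return blocks
```

Round-tripping the signature unchanged matters here: Anthropic uses it to verify that the thinking content sent back with a follow-up request is the same content the model originally produced.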
Testing Plan
Unit Tests:
Unit tests do not exist for this file, so there were no unit tests to update.
Manual End-to-End (E2E) Tests:
Ran the helloworld_litellm sample against a Vertex AI Claude model using LiteLLM.
Checklist