Use native langchain MCP tools and messages #51
Conversation
Signed-off-by: Theo Brigitte <[email protected]>
Error: Could not generate a valid Mermaid diagram after multiple attempts.

🔧 golangci-lint (1.64.8)
Error: you are using a configuration file for golangci-lint v2 with golangci-lint v1: please use golangci-lint v2
Actionable comments posted: 0
🔭 Outside diff range comments (1)
cmd/main.go (1)
25-25: Remove or clarify the contradictory comment.

This comment states that "internal/mcp is no longer needed here" but then says "This comment is now incorrect". Since the `internal/mcp` package is clearly still being used throughout the file (e.g., `mcp.Client`, `mcp.NewClient`), this comment should be removed to avoid confusion.

```diff
- // internal/mcp is no longer needed here - This comment is now incorrect
```
🧹 Nitpick comments (3)
internal/handlers/llm_mcp_bridge.go (1)
123-152: Well-structured refactoring with improved clarity.

The change from parsing raw LLM responses to accepting explicit tool calls is a significant improvement. The function is now more focused and testable.

Consider simplifying the error return pattern for consistency:

```diff
- return errorMessage, nil
+ return errorMessage, err
```

This would allow callers to distinguish between tool execution failures and other errors if needed.
internal/llm/langchain.go (1)
18-18: Remove unused type alias.

The `Message` type alias for `llms.MessageContent` is not used anywhere in this file. Consider removing it to avoid confusion.

```diff
-type Message llms.MessageContent
```

internal/slack/client.go (1)
236-236: Remove commented out code.

This commented line appears to be from the previous implementation and is no longer needed since tools are now passed directly via options.

```diff
- // Generate the system prompt with tool information
- //toolPrompt := c.generateToolPrompt()
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- cmd/main.go (9 hunks)
- internal/common/types.go (1 hunks)
- internal/handlers/llm_mcp_bridge.go (2 hunks)
- internal/llm/langchain.go (3 hunks)
- internal/llm/provider.go (2 hunks)
- internal/llm/registry.go (2 hunks)
- internal/slack/client.go (8 hunks)
🔇 Additional comments (12)
internal/common/types.go (1)
8-8: Consider retaining JSON tags for ServerName field.

The removal of JSON tags from the `ServerName` field could affect serialization if this struct is used in API responses or configuration files. Unless this struct is now purely internal, consider keeping the JSON tags for consistency.

```shell
#!/bin/bash
# Description: Check if ToolInfo is used in any API or serialization context

# Search for ToolInfo usage in files that might serialize it
rg -A 5 "ToolInfo" --type go | grep -E "(json\.|Marshal|Unmarshal|Encode|Decode)"

# Check if ToolInfo appears in any API handlers or HTTP-related code
rg -A 5 "ToolInfo" --type go | grep -E "(http\.|Handler|Response|Request)"
```

internal/llm/registry.go (1)
190-213: Consistent API updates for structured message handling.

The method signatures have been properly updated to use `llms.MessageContent` and `llms.ContentChoice`, maintaining consistency with the provider interface changes throughout the codebase.

internal/llm/provider.go (2)
81-83: Good use of struct embedding for RequestMessage.

The embedding of `llms.MessageContent` provides a clean way to inherit its functionality while maintaining the ability to extend with custom fields in the future if needed.
86-92: Tools field properly integrated into ProviderOptions.

The addition of the `Tools` field enables function calling capabilities, aligning well with the PR's objective to leverage native langchain MCP tools.

internal/llm/langchain.go (2)
113-133: Clean refactoring of GenerateCompletion method.

The direct use of `p.llm.GenerateContent` with proper error handling for empty choices is a significant improvement. The method is now more straightforward and maintainable.
204-210: Proper integration of tools support.

The addition of tools to call options is well-implemented with appropriate debug logging.

Note: The comment on line 206 mentions potential conversion but doesn't implement it. If tools require format conversion for specific LLM providers, consider implementing that logic or updating the comment to clarify why it's not needed.
cmd/main.go (2)
56-59: LGTM! Clean integration of LLM tools.

The function signatures have been consistently updated to handle `llmsTools` throughout the initialization flow. This aligns well with the PR objective of using native langchain MCP tools.

Also applies to: 128-133, 144-145, 165-165, 449-449
255-279: Well-implemented tool conversion logic.

The conversion from MCP tool definitions to `llms.Tool` structures is properly implemented. The mapping preserves the JSON schema structure for parameters while creating the appropriate function definitions.

Consider adding validation to ensure the InputSchema type is a valid JSON Schema type:

```shell
#!/bin/bash
# Description: Check if there are any validation or type constraints for InputSchema.Type in the codebase

# Search for InputSchema type definitions or validations
ast-grep --pattern 'type InputSchema struct {
  $$$
}'

# Also check for any existing validation logic
rg -A 5 'InputSchema.*Type'
```

internal/slack/client.go (4)
35-40: Good refactoring to use native langchain types.

The migration from custom message types to `llms.MessageContent` aligns with the PR objectives. The addition of `toolCallsLimit` (set to 25) is a good safety measure to prevent infinite tool call loops.

Also applies to: 44-44, 108-112
185-213: Clean implementation of message history with proper role handling.

The refactored `addToHistory` and `getContextFromHistory` methods properly handle the new `llms.MessageContent` structure with explicit roles and content parts. This aligns well with langchain's message format conventions.
263-269: Good addition of helper method for consistent response handling.

The `answer` method provides a clean abstraction for sending responses and handles the edge case of empty LLM responses gracefully.
273-347: Excellent refactoring of tool call processing logic.

The new iterative approach is much cleaner and more robust:

- Proper loop control with `toolCallsLimit` prevents infinite loops
- Maintains full conversation history including tool responses
- Good error handling for JSON unmarshaling and tool execution
- Natural flow that lets the LLM generate final responses after tool calls

This is a significant improvement over the previous manual prompt construction approach.
This seems like it overlaps with #42 a bit. I think we should keep the bridge (for now, anyway) alongside the native langchain tools, since not all LLMs support tools (yet).
AFAIK OpenAI and Anthropic do support tools; the bridge could be kept only for the Ollama models that do not support tools yet.
This PR improves how tools and messages are sent to the LLM.

- `llms.WithTools` simplifies processing responses with tool calls and also gets rid of all the formatting and prompting for tool context sent to the LLM.
- `llms.MessageContent` standardizes how message context/history is sent to the LLM; this allows each message type (human, ai, or tool response) to be better defined.
- `processLLMResponseAndReply` now runs a loop to process potentially multiple LLM tool calls (max 25 calls), feeding back the tool response each time. It stops when no more tool calls are made.

Summary by CodeRabbit

- New Features
- Refactor
- Bug Fixes