This repository was archived by the owner on Apr 12, 2026. It is now read-only.

Use native langchain MCP tools and messages #51

Open

TheoBrigitte wants to merge 5 commits into tuannvm:main from TheoBrigitte:mcp-tools

Conversation

@TheoBrigitte (Contributor) commented Jun 24, 2025

This PR improves how tools and messages are sent to the LLM.

  • Tools are now sent using llms.WithTools, which simplifies processing responses that contain tool calls and removes all the manual formatting and prompting previously used to describe tools to the LLM.
  • Messages now use llms.MessageContent to standardize how message context/history is sent to the LLM; this makes it possible to properly define each message type (human, AI, or tool response).
  • processLLMResponseAndReply now runs a loop to process potentially multiple LLM tool calls (max 25 calls), feeding the tool response back each time; it stops when no more tool calls are made. A sketch of the loop follows below.
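A minimal sketch of that loop, assuming github.com/tmc/langchaingo/llms; the callTool helper and variable names are illustrative, not the PR's exact code:

const toolCallsLimit = 25

for i := 0; i < toolCallsLimit; i++ {
	resp, err := llm.GenerateContent(ctx, history, llms.WithTools(tools))
	if err != nil {
		return err
	}
	if len(resp.Choices) == 0 {
		return errors.New("LLM returned no choices")
	}
	choice := resp.Choices[0]
	if len(choice.ToolCalls) == 0 {
		break // no more tool calls: choice.Content holds the final answer
	}
	for _, call := range choice.ToolCalls {
		result := callTool(ctx, call) // execute the MCP tool (illustrative helper)
		// Record the assistant's tool call, then feed the tool response back
		// into the conversation history for the next iteration.
		history = append(history,
			llms.MessageContent{Role: llms.ChatMessageTypeAI, Parts: []llms.ContentPart{call}},
			llms.MessageContent{Role: llms.ChatMessageTypeTool, Parts: []llms.ContentPart{
				llms.ToolCallResponse{ToolCallID: call.ID, Name: call.FunctionCall.Name, Content: result},
			}},
		)
	}
}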

Summary by CodeRabbit

  • New Features

    • Enhanced Slack integration to support advanced tool usage with large language models (LLMs), allowing more dynamic and interactive responses.
    • Improved handling of conversation history and tool calls for richer, context-aware interactions.
  • Refactor

    • Streamlined message and tool call processing to use structured message formats, enabling more reliable and flexible LLM interactions.
    • Unified and modernized the way tools are discovered and made available to LLMs within the app.
  • Bug Fixes

    • Improved error handling and messaging for tool execution failures during conversations.

@coderabbitai (bot) commented Jun 24, 2025

Error: Could not generate a valid Mermaid diagram after multiple attempts.

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (1.64.8)

Error: you are using a configuration file for golangci-lint v2 with golangci-lint v1: please use golangci-lint v2



@coderabbitai (bot) left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (1)
cmd/main.go (1)

25-25: Remove or clarify the contradictory comment.

This comment states that "internal/mcp is no longer needed here" but then says "This comment is now incorrect". Since the internal/mcp package is clearly still being used throughout the file (e.g., mcp.Client, mcp.NewClient), this comment should be removed to avoid confusion.

-	// internal/mcp is no longer needed here - This comment is now incorrect
🧹 Nitpick comments (3)
internal/handlers/llm_mcp_bridge.go (1)

123-152: Well-structured refactoring with improved clarity.

The change from parsing raw LLM responses to accepting explicit tool calls is a significant improvement. The function is now more focused and testable.

Consider simplifying the error return pattern for consistency:

-		return errorMessage, nil
+		return errorMessage, err

This would allow callers to distinguish between tool execution failures and other errors if needed.

internal/llm/langchain.go (1)

18-18: Remove unused type alias.

The Message type alias for llms.MessageContent is not used anywhere in this file. Consider removing it to avoid confusion.

-type Message llms.MessageContent
-
internal/slack/client.go (1)

236-236: Remove commented out code.

This commented line appears to be from the previous implementation and is no longer needed since tools are now passed directly via options.

-	// Generate the system prompt with tool information
-	//toolPrompt := c.generateToolPrompt()
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8e65ac3 and 8ef5fd9.

📒 Files selected for processing (7)
  • cmd/main.go (9 hunks)
  • internal/common/types.go (1 hunks)
  • internal/handlers/llm_mcp_bridge.go (2 hunks)
  • internal/llm/langchain.go (3 hunks)
  • internal/llm/provider.go (2 hunks)
  • internal/llm/registry.go (2 hunks)
  • internal/slack/client.go (8 hunks)
🔇 Additional comments (12)
internal/common/types.go (1)

8-8: Consider retaining JSON tags for ServerName field.

The removal of JSON tags from the ServerName field could affect serialization if this struct is used in API responses or configuration files. Unless this struct is now purely internal, consider keeping the JSON tags for consistency.

#!/bin/bash
# Description: Check if ToolInfo is used in any API or serialization context

# Search for ToolInfo usage in files that might serialize it
rg -A 5 "ToolInfo" --type go | grep -E "(json\.|Marshal|Unmarshal|Encode|Decode)"

# Check if ToolInfo appears in any API handlers or HTTP-related code
rg -A 5 "ToolInfo" --type go | grep -E "(http\.|Handler|Response|Request)"
internal/llm/registry.go (1)

190-213: Consistent API updates for structured message handling.

The method signatures have been properly updated to use llms.MessageContent and llms.ContentChoice, maintaining consistency with the provider interface changes throughout the codebase.

internal/llm/provider.go (2)

81-83: Good use of struct embedding for RequestMessage.

The embedding of llms.MessageContent provides a clean way to inherit its functionality while maintaining the ability to extend with custom fields in the future if needed.


86-92: Tools field properly integrated into ProviderOptions.

The addition of the Tools field enables function calling capabilities, aligning well with the PR's objective to leverage native langchain MCP tools.
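A rough sketch of the two shapes described above (any detail beyond the fields named in the diff is an assumption):

type RequestMessage struct {
	llms.MessageContent // embedded: role plus content parts, extensible with custom fields later
}

type ProviderOptions struct {
	// ... existing generation options ...
	Tools []llms.Tool // forwarded to the model, e.g. via llms.WithTools
}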

internal/llm/langchain.go (2)

113-133: Clean refactoring of GenerateCompletion method.

The direct use of p.llm.GenerateContent with proper error handling for empty choices is a significant improvement. The method is now more straightforward and maintainable.
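As a hedged sketch of that pattern (not the PR's exact code):

resp, err := p.llm.GenerateContent(ctx, messages, callOptions...)
if err != nil {
	return nil, err
}
if len(resp.Choices) == 0 {
	return nil, fmt.Errorf("LLM returned no content choices")
}
return resp.Choices[0], nil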


204-210: Proper integration of tools support.

The addition of tools to call options is well-implemented with appropriate debug logging.

Note: The comment on line 206 mentions potential conversion but doesn't implement it. If tools require format conversion for specific LLM providers, consider implementing that logic or updating the comment to clarify why it's not needed.
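Presumably the wiring reduces to something like this (variable and logger names are assumed):

if len(opts.Tools) > 0 {
	callOptions = append(callOptions, llms.WithTools(opts.Tools))
	p.logger.Debug("passing tools to LLM", "count", len(opts.Tools))
}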

cmd/main.go (2)

56-59: LGTM! Clean integration of LLM tools.

The function signatures have been consistently updated to handle llmsTools throughout the initialization flow. This aligns well with the PR objective of using native langchain MCP tools.

Also applies to: 128-133, 144-145, 165-165, 449-449


255-279: Well-implemented tool conversion logic.

The conversion from MCP tool definitions to llms.Tool structures is properly implemented. The mapping preserves the JSON schema structure for parameters while creating the appropriate function definitions.

Consider adding validation to ensure the InputSchema type is a valid JSON Schema type:

#!/bin/bash
# Description: Check if there are any validation or type constraints for InputSchema.Type in the codebase

# Search for InputSchema type definitions or validations
ast-grep --pattern 'type InputSchema struct {
  $$$
}'

# Also check for any existing validation logic
rg -A 5 'InputSchema.*Type'
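A minimal sketch of the conversion in question, assuming an MCP tool definition whose InputSchema follows mcp-go conventions (field names not verified against this repo):

llmTool := llms.Tool{
	Type: "function",
	Function: &llms.FunctionDefinition{
		Name:        mcpTool.Name,
		Description: mcpTool.Description,
		// Preserve the JSON schema structure for the tool's parameters.
		Parameters: map[string]any{
			"type":       mcpTool.InputSchema.Type,
			"properties": mcpTool.InputSchema.Properties,
			"required":   mcpTool.InputSchema.Required,
		},
	},
}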
internal/slack/client.go (4)

35-40: Good refactoring to use native langchain types.

The migration from custom message types to llms.MessageContent aligns with the PR objectives. The addition of toolCallsLimit (set to 25) is a good safety measure to prevent infinite tool call loops.

Also applies to: 44-44, 108-112


185-213: Clean implementation of message history with proper role handling.

The refactored addToHistory and getContextFromHistory methods properly handle the new llms.MessageContent structure with explicit roles and content parts. This aligns well with langchain's message format conventions.
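Illustratively, a history entry now has this shape (a sketch; the actual helper may differ):

c.history = append(c.history, llms.MessageContent{
	Role:  llms.ChatMessageTypeHuman, // or ChatMessageTypeAI / ChatMessageTypeTool
	Parts: []llms.ContentPart{llms.TextContent{Text: text}},
})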


263-269: Good addition of helper method for consistent response handling.

The answer method provides a clean abstraction for sending responses and handles the edge case of empty LLM responses gracefully.


273-347: Excellent refactoring of tool call processing logic.

The new iterative approach is much cleaner and more robust:

  • Proper loop control with toolCallsLimit prevents infinite loops
  • Maintains full conversation history including tool responses
  • Good error handling for JSON unmarshaling and tool execution
  • Natural flow that lets the LLM generate final responses after tool calls

This is a significant improvement over the previous manual prompt construction approach.
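For instance, the JSON handling called out above might look roughly like this (a sketch; the answer helper's signature is assumed, and encoding/json is used for unmarshaling):

var args map[string]any
if err := json.Unmarshal([]byte(call.FunctionCall.Arguments), &args); err != nil {
	// Report malformed tool arguments to the user instead of failing silently.
	c.answer(fmt.Sprintf("failed to parse tool call arguments: %v", err))
	return
}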

@sthomson-wyn (Contributor) commented:

This seems like it overlaps with #42 a bit. I think we should keep the bridge (for now, anyway) alongside the native langchain tools, since not all LLMs support tools (yet).

@TheoBrigitte (Contributor, Author) commented:

This seems like it overlaps with #42 a bit. I think we should keep the bridge (for now, anyway) alongside the native langchain tools, since not all LLMs support tools (yet).

AFAIK OpenAI and Anthropic do support tools; the bridge could be kept only for the Ollama models that do not support tools yet.

