Conversation
- Added `LLMInteractionLogger` for logging interactions with the LLM, configurable via app settings.
- Introduced `max_steps` and logging options in app settings.
- Updated `AIQueryHandler` to support multi-step query processing and logging of translation and response generation.
- Enhanced prompts to include request context for better query resolution.
- Improved error handling and logging throughout the AI query service.
- Added request context retrieval in `SchemaAnalyzer` for better context awareness in queries.
…andling
- Added `retry_on_empty` setting to app configuration to enable retrying queries that return no results.
- Introduced `translate_retry_on_empty` method in `AIQueryHandler` to generate alternative queries or confirm no results.
- Enhanced `LLMInteractionLogger` to log retry attempts and responses.
- Updated `build_retry_on_empty_prompt` method in `PromptBuilder` to create prompts for suggesting alternative queries.
- Modified `AIQueryService` to utilize the new retry mechanism when no results are found.
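The retry-on-empty flow described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual implementation: `run_query` and `suggest_alternative_query` are stand-ins for the real `AIQueryService` / `AIQueryHandler` methods, whose signatures are not shown in this PR excerpt.

```python
# Hypothetical sketch of the retry-on-empty mechanism; run_query and
# suggest_alternative_query are stubs, not the real service methods.
import asyncio
from typing import List, Optional


async def run_query(query: str) -> List[dict]:
    # Stub backend: only the rephrased query matches anything here.
    return [{"id": 1}] if query == "rephrased" else []


async def suggest_alternative_query(query: str) -> Optional[str]:
    # Stub for the LLM call that either proposes an alternative query
    # or returns None to confirm there really are no results.
    return "rephrased"


async def query_with_retry(query: str, retry_on_empty: bool = True) -> List[dict]:
    results = await run_query(query)
    if results or not retry_on_empty:
        return results
    alternative = await suggest_alternative_query(query)
    if alternative is None:
        # The LLM confirmed "no results" rather than rephrasing.
        return []
    return await run_query(alternative)


print(asyncio.run(query_with_retry("original")))  # [{'id': 1}]
```

The single-retry shape keeps the mechanism bounded: one alternative query at most, with the LLM able to short-circuit by confirming the empty result.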
Pull request overview
This pull request adds MCP (Model Context Protocol) support to Guillotina as a new contrib module. MCP enables AI assistants and LLMs to interact with Guillotina content through a standardized protocol. The implementation provides both in-process and HTTP-based backends for querying and retrieving content.
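For orientation, MCP clients and servers exchange JSON-RPC 2.0 messages; a tool invocation against this module might look roughly like the following (the tool name and arguments are illustrative, taken from the operations this PR implements, not a verbatim capture):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": {"query": "annual report"}
  }
}
```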
Changes:
- Added a new `guillotina.contrib.mcp` module with services, backends, tools, and command support for MCP protocol integration
- Implemented in-process and HTTP backends for search, count, get_content, and list_children operations
- Added authentication and token management services for MCP clients
- Included basic test coverage for the MCP functionality
Reviewed changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 21 comments.
| File | Description |
|---|---|
| guillotina/tests/test_mcp.py | Adds basic tests for MCP service registration, backend context requirements, authentication, and token generation |
| guillotina/contrib/mcp/tools.py | Defines MCP tool functions (search, count, get_content, list_children) and registers them with the MCP server |
| guillotina/contrib/mcp/services.py | Implements the @mcp endpoint service and @mcp-token service for JWT token generation |
| guillotina/contrib/mcp/server.py | Provides MCP server initialization and ASGI app creation with global state management |
| guillotina/contrib/mcp/permissions.py | Defines two new permissions for MCP usage and token issuance, granted to authenticated users |
| guillotina/contrib/mcp/interfaces.py | Defines IMCPDescriptionExtras interface for extending tool descriptions |
| guillotina/contrib/mcp/command.py | Implements CLI command for running standalone MCP server with HTTP backend |
| guillotina/contrib/mcp/backend.py | Implements InProcessBackend and HttpBackend classes for content operations with context management |
| guillotina/contrib/mcp/__init__.py | Module initialization with default settings and configuration scanning |
| Makefile | Adds format and tests targets for code formatting and test execution |
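The tool functions in `tools.py` (search, count, get_content, list_children) are registered with the MCP server and dispatched against a backend. A toy sketch of that shape, using a plain dict registry rather than the actual MCP SDK registration (the decorator and backend protocol here are stand-ins):

```python
# Illustrative-only tool registry; the real tools.py registers its
# functions with the MCP server, and the backend is a Guillotina
# content backend rather than this in-memory dict.
from typing import Any, Callable, Dict

TOOLS: Dict[str, Callable[..., Any]] = {}


def tool(name: str) -> Callable:
    # Register a function under a tool name at import time.
    def register(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return register


@tool("search")
def search(backend: dict, query: str) -> list:
    # Naive substring search over an in-memory "backend".
    return [item for item in backend["items"] if query in item["title"]]


@tool("count")
def count(backend: dict, query: str) -> int:
    return len(search(backend, query))


backend = {"items": [{"title": "mcp intro"}, {"title": "other"}]}
print(TOOLS["count"](backend, "mcp"))  # prints 1
```

Keeping tools in a registry keyed by name is what lets a single dispatch point (such as an `_execute_tool` helper) route incoming `tools/call` requests to the right function.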
- Added new MCP documentation to describe the Model Context Protocol integration.
- Implemented the `@mcp` and `@chat` endpoints for LLM interactions.
- Refactored tool definitions to support additional chat functionalities.
- Removed deprecated chat tools and updated related service logic for better performance.
…txt for compatibility
Pull request overview
Copilot reviewed 14 out of 14 changed files in this pull request and generated 6 comments.
```python
for tc in tool_calls:
    tc_id = getattr(tc, "id", None) or (tc.get("id") if isinstance(tc, dict) else "")
    fn = getattr(tc, "function", None) or (tc.get("function") if isinstance(tc, dict) else {})
    name = getattr(fn, "name", None) if fn else None
    if name is None and isinstance(fn, dict):
        name = fn.get("name", "")
    raw_args = getattr(fn, "arguments", None) if fn else None
    if raw_args is None and isinstance(fn, dict):
        raw_args = fn.get("arguments", "{}")
    tool_calls_list.append(
        {
            "id": tc_id,
            "type": "function",
            "function": {"name": name or "", "arguments": raw_args or "{}"},
        }
    )
assistant_msg["tool_calls"] = tool_calls_list
messages.append(assistant_msg)
for tc in tool_calls:
    tc_id = getattr(tc, "id", None) or (tc.get("id") if isinstance(tc, dict) else "")
    fn = getattr(tc, "function", None) or (tc.get("function") if isinstance(tc, dict) else {})
    name = getattr(fn, "name", None) if fn else None
    if name is None and isinstance(fn, dict):
        name = fn.get("name", "")
    raw_args = getattr(fn, "arguments", None) if fn else None
    if raw_args is None and isinstance(fn, dict):
        raw_args = fn.get("arguments", "{}")
    name = name or ""
    raw_args = raw_args or "{}"
    try:
        arguments = json.loads(raw_args) if isinstance(raw_args, str) else (raw_args or {})
    except json.JSONDecodeError:
        arguments = {}
    try:
        result = await _execute_tool(backend, name, arguments)
    except Exception as e:
        logger.exception("MCP chat tool %s failed", name)
        result = {"error": str(e)}
    messages.append({"role": "tool", "tool_call_id": tc_id, "content": json.dumps(result)})
```
There is significant code duplication in parsing tool calls. Lines 141-156 and 159-169 contain nearly identical logic for extracting tc_id, function name, and arguments. This makes the code harder to maintain. Consider extracting this logic into a helper function to reduce duplication and improve maintainability.
```python
litellm = __import__("litellm", fromlist=["acompletion"])
acompletion = getattr(litellm, "acompletion")
```
The chat service dynamically imports litellm without verifying it's installed first. While pytest.importorskip is used in tests, the actual service code should handle ImportError gracefully and return a clear error message if litellm is not available. This would provide better user experience when litellm is not installed but chat is enabled.
```diff
-litellm = __import__("litellm", fromlist=["acompletion"])
-acompletion = getattr(litellm, "acompletion")
+try:
+    litellm = __import__("litellm")
+except ImportError:
+    raise HTTPPreconditionFailed(
+        content={
+            "reason": "litellm is not installed",
+            "hint": "Install the 'litellm' package to use the @chat endpoint.",
+        }
+    )
+acompletion = getattr(litellm, "acompletion", None)
+if acompletion is None or not callable(acompletion):
+    raise HTTPPreconditionFailed(
+        content={
+            "reason": "litellm.acompletion is unavailable",
+            "hint": "Ensure a compatible version of 'litellm' providing 'acompletion' is installed.",
+        }
+    )
```
…ved compatibility and performance
… annotation and adjust test assertions for consistent status checks
…bled and disabled states
…ces, enhancing utility integration in the MCP module
…encies, and enhance CI workflow to support Python 3.8 and 3.9
… argon2-cffi based on Python version compatibility
…o accommodate Python version compatibility
…proved Python version compatibility
…rgon2-cffi, pyjwt, and typing_extensions in requirements.txt and setup.py to enhance compatibility across Python versions
…dling in chat tools
…s session handling and cleanup process