📋 Split large responses into batches
When we run Cypher queries against the graph database, we should always apply a LIMIT X clause to avoid exceeding the LLM's context limit. If the GraphDB still returns too many results, we could either truncate the result set OR split it into multiple batches, process each batch with the LLM separately (reusing the original user request), and then merge the partial answers into a single reply.
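A minimal sketch of the batch-and-merge flow, assuming the query tool returns rows as a list of dicts; `BATCH_SIZE`, `chunk`, `answer_in_batches`, and the `ask_llm` callable are hypothetical names for illustration, not existing `codebase_rag` APIs:

```python
from typing import Callable, Iterable

BATCH_SIZE = 100  # hypothetical value; the real cap would live in codebase_rag/config.py


def chunk(rows: list[dict], size: int) -> Iterable[list[dict]]:
    """Yield successive batches of query rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]


def answer_in_batches(
    user_request: str,
    rows: list[dict],
    ask_llm: Callable[[str, list[dict]], str],
) -> str:
    """Call the LLM once per batch with the original request, then merge the partial replies."""
    partial_answers = [ask_llm(user_request, batch) for batch in chunk(rows, BATCH_SIZE)]
    if len(partial_answers) == 1:
        return partial_answers[0]
    # Merge step: a final LLM call could also be used to synthesize the partial answers.
    return "\n\n".join(partial_answers)
```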
🎯 Acceptance Criteria
- Implement result truncation at the query tool level (e.g., limit to first 100 results)
- Add warning messages when results are truncated
- Update Cypher prompts to automatically add LIMIT clauses for broad queries
- Add split logic with reduced result sets when limits are hit (truncation and LIMIT injection are sketched after this list)
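A rough sketch of the truncation and LIMIT-injection pieces described above; `MAX_RESULTS`, `ensure_limit`, and `truncate_results` are hypothetical helpers (the actual cap would presumably be configured in `codebase_rag/config.py`):

```python
import re

MAX_RESULTS = 100  # hypothetical cap; would be read from codebase_rag/config.py in practice


def ensure_limit(cypher: str, limit: int = MAX_RESULTS) -> str:
    """Append a LIMIT clause when the query does not already end with one."""
    query = cypher.strip().rstrip(";").rstrip()
    if re.search(r"\bLIMIT\s+\d+$", query, flags=re.IGNORECASE):
        return cypher
    return f"{query} LIMIT {limit}"


def truncate_results(rows: list[dict], limit: int = MAX_RESULTS) -> tuple[list[dict], str | None]:
    """Cap the result set and return a warning message when rows were dropped."""
    if len(rows) <= limit:
        return rows, None
    warning = f"Results truncated: showing first {limit} of {len(rows)} rows."
    return rows[:limit], warning
```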
🔗 Related Files/Components
- File: `codebase_rag/services/llm.py`
- File: `codebase_rag/prompts.py`
- File: `codebase_rag/config.py`
📝 Additional Context
🚦 Priority
📊 Estimated Effort