This repo provides a simple example of how to build a ReAct agent with MCP and local tools. Specifically, it builds a Text2Cypher ReAct agent with the Neo4j Cypher MCP server, extended with local tools for additional capabilities.

The result is a conversational AI agent that connects to a Neo4j Movies database and answers movie-related questions asked in natural language. It is built with LangGraph's implementation of a ReAct agent, the Neo4j Cypher MCP server, and a custom movie recommendations tool.
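The core pattern looks roughly like the following minimal sketch, which combines tools loaded from an MCP server with a local Python tool via `langchain-mcp-adapters` and `langgraph.prebuilt.create_react_agent`. The model name, server config, and tool stub are illustrative; see `single_file_agent.py` for the repo's actual code.

```python
import asyncio
import os

from langchain_core.tools import tool
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def find_movie_recommendations(movie_title: str) -> str:
    """Find movies liked by users who also enjoyed the given movie."""
    # Stub for illustration: the real tool queries the Movies database.
    return f"Users who liked {movie_title} also liked ..."


async def main() -> None:
    # Launch the Neo4j Cypher MCP server as a subprocess over stdio and load
    # its tools (schema retrieval, read-only Cypher) as LangChain tools.
    client = MultiServerMCPClient(
        {
            "neo4j-cypher": {
                "command": "uvx",
                "args": ["mcp-neo4j-cypher"],
                "transport": "stdio",
                # The server reads its Neo4j connection settings from the
                # environment; pass the NEO4J_* variables through.
                "env": {k: v for k, v in os.environ.items() if k.startswith("NEO4J_")},
            }
        }
    )
    mcp_tools = await client.get_tools()

    # A ReAct agent that can call both the MCP tools and the local tool.
    agent = create_react_agent(
        ChatOpenAI(model="gpt-4o"),
        mcp_tools + [find_movie_recommendations],
    )

    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Recommend films like The Matrix"}]}
    )
    print(result["messages"][-1].content)


if __name__ == "__main__":
    asyncio.run(main())
```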
Features:

- Natural Language to Cypher: Ask questions in plain English and get answers from your Neo4j database
- ReAct Agent Pattern: Uses interleaved reasoning and acting loops to handle complex, multi-step questions
- Schema-Aware: Automatically retrieves and uses the database schema for accurate query generation
- Interactive CLI: Chat-based interface for easy interaction
Prerequisites:

- Python 3.10 or higher
- Neo4j Aura account or a local Neo4j instance with the Movies database loaded
- OpenAI API key
- `uv` package manager (recommended) or `pip`
Installation:

With uv (recommended):

1. Install uv (if not already installed):

   ```bash
   pip install uv
   ```

2. Clone and set up the project:

   ```bash
   git clone https://github.com/neo4j-field/text2cypher-react-agent-example
   cd text2cypher-react-agent-example
   ```

3. Install dependencies:

   ```bash
   uv sync
   ```
With pip:

1. Clone and set up the project:

   ```bash
   git clone https://github.com/neo4j-field/text2cypher-react-agent-example
   cd text2cypher-react-agent-example
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
Configuration:

1. Copy the example environment file:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` with your credentials:

   ```
   OPENAI_API_KEY=your_openai_api_key_here
   NEO4J_USERNAME=neo4j
   NEO4J_PASSWORD=your_neo4j_password
   NEO4J_URI=neo4j+s://your-instance.databases.neo4j.io
   NEO4J_DATABASE=neo4j
   ```
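At runtime these values are read from the environment. A minimal sketch of how the agent side might consume them (the repo's actual loading code may differ):

```python
import os

# Populated from .env before the agent starts (e.g. by your shell or a loader).
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]          # used by langchain-openai
NEO4J_URI = os.environ["NEO4J_URI"]                    # neo4j+s://... for Aura, bolt://... locally
NEO4J_USERNAME = os.environ["NEO4J_USERNAME"]
NEO4J_PASSWORD = os.environ["NEO4J_PASSWORD"]
NEO4J_DATABASE = os.getenv("NEO4J_DATABASE", "neo4j")  # optional, defaults to "neo4j"
```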
Architecture:

- LangGraph ReAct Agent: Implements reasoning and acting loops for complex queries
- Neo4j Cypher MCP Server: Provides schema introspection and query execution
- Custom Recommendations Tool: Local tool that finds movie recommendations based on shared user preferences
- Interactive CLI: Command-line chat interface
Available tools:

- `get_neo4j_schema`: Retrieves the database schema for informed query writing
- `read_neo4j_cypher`: Executes read-only Cypher queries against the database
- `find_movie_recommendations`: Custom recommendation engine that finds movies liked by users who also enjoyed a target movie
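For illustration, here is a hypothetical implementation of the recommendation tool using the `neo4j` driver. The relationship type (`REVIEWED`) and the overlap-count scoring are assumptions about the Movies schema, not the repo's actual query.

```python
import os

from langchain_core.tools import tool
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    os.environ["NEO4J_URI"],
    auth=(os.environ["NEO4J_USERNAME"], os.environ["NEO4J_PASSWORD"]),
)

# Assumed schema: (:Person)-[:REVIEWED]->(:Movie). Movies are ranked by how
# many people reviewed both the target movie and the candidate.
RECOMMENDATION_QUERY = """
MATCH (target:Movie {title: $title})<-[:REVIEWED]-(p:Person)-[:REVIEWED]->(rec:Movie)
WHERE rec <> target
RETURN rec.title AS title, count(p) AS shared_reviewers
ORDER BY shared_reviewers DESC
LIMIT $limit
"""


@tool
def find_movie_recommendations(movie_title: str, limit: int = 5) -> list[dict]:
    """Find movies liked by users who also enjoyed `movie_title`."""
    records, _, _ = driver.execute_query(
        RECOMMENDATION_QUERY,
        title=movie_title,
        limit=limit,
        database_=os.getenv("NEO4J_DATABASE", "neo4j"),
    )
    return [record.data() for record in records]
```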
Run the agent:

With uv:

```bash
make run-agent-uv
# or
uv run python3 single_file_agent.py
```

With pip:

```bash
make run-agent
# or
python3 single_file_agent.py
```

Once running, you can ask questions like:
- "What movies are in the database?"
- "Tell me about The Matrix"
- "Recommend me some films like The Dark Knight."
To exit the agent, type any of: `exit`, `quit`, or `q`.
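The interactive loop has roughly the following shape; this sketch reuses the `agent` object from the earlier example and is not the repo's exact CLI code.

```python
async def chat(agent) -> None:
    # Simple REPL: read a question, run the agent, print the final answer.
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in {"exit", "quit", "q"}:
            break
        result = await agent.ainvoke(
            {"messages": [{"role": "user", "content": user_input}]}
        )
        print("Agent:", result["messages"][-1].content)

# e.g. asyncio.run(chat(agent))
```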
This repo contains a comprehensive local evaluation suite with RAGAS metrics for measuring agent performance. The evaluation system can be used and extended to evaluate your own agents.
The evaluation suite includes three RAGAS metrics:
- Rouge Score - Measures the longest common subsequence between the reference answer and agent response using F1 score
- Factual Correctness - Uses an LLM judge to evaluate the factual accuracy of the agent's response against the reference answer with high atomicity and coverage
- Answer Relevancy - Measures how relevant the agent's response is to the user's question using embeddings
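For reference, the three metrics can be computed per question roughly as follows (ragas 0.2.x-style API; the model choices and sample text are illustrative, and `eval.py` may wire things up differently):

```python
import asyncio

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from ragas.dataset_schema import SingleTurnSample
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import FactualCorrectness, ResponseRelevancy, RougeScore

llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o"))
embeddings = LangchainEmbeddingsWrapper(OpenAIEmbeddings())

# One evaluated question: the agent's answer vs. the reference answer.
sample = SingleTurnSample(
    user_input="Who directed The Matrix?",
    response="The Matrix was directed by the Wachowskis.",
    reference="The Matrix (1999) was directed by the Wachowskis.",
)

metrics = [
    RougeScore(rouge_type="rougeL"),  # longest common subsequence; F1 by default
    FactualCorrectness(llm=llm, mode="f1", atomicity="high", coverage="high"),
    ResponseRelevancy(llm=llm, embeddings=embeddings),
]


async def main() -> None:
    for metric in metrics:
        score = await metric.single_turn_ascore(sample)
        print(type(metric).__name__, round(score, 3))


asyncio.run(main())
```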
To run the evaluation suite:

1. Configure the `eval.py` file with the LLM name, tools, and prompt you would like to use
2. Ensure you have populated the `questions.yaml` file with your eval question set
3. Run evaluations:

   ```bash
   make run-eval-uv   # Using uv
   # or
   make run-eval      # Using pip
   ```

4. The eval results CSV will be saved to `evals/output/eval_benchmark_results_<timestamp>.csv`
5. View and analyze the results with `review.ipynb`
6. Generate a text report:

   ```bash
   make generate-report-uv csv-name=<file-name>   # Using uv
   # or
   make generate-report csv-name=<file-name>      # Using pip
   ```
NOTE: The evaluation script may take a while to run, depending on the number of questions and the length of the resulting conversations.
The resulting evaluation CSV contains the following columns:
Question & Answer Data:
- `question_id: str` - Unique identifier for the question
- `question: str` - The user's input question
- `expected_answer: str` - Reference answer for comparison
- `agent_final_answer: Optional[str]` - The agent's response
Agent Performance Metrics:
- `generated_cypher: list[ReadNeo4jCypherToolInput]` - All Cypher queries generated
- `model: str` - LLM model used
- `available_tools: list[str]` - Tools available to the agent
- `called_tools: list[str]` - Tools actually invoked
- `num_messages: Optional[int]` - Total messages in conversation
- `num_llm_calls: Optional[int]` - Number of LLM invocations
- `num_tool_calls: Optional[int]` - Number of tool invocations
- `response_time: Optional[float]` - Time to complete question (seconds)
- `error: Optional[str]` - Error message if evaluation failed
RAGAS Quality Metrics:
- `rouge_f1_score: Optional[float]` - Rouge-L F1 score
- `factual_correctness_f1_score: Optional[float]` - Factual correctness F1 score
- `answer_relevancy_score: Optional[float]` - Answer relevancy score
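A few lines of pandas are enough for a first look at a results file (a hypothetical snippet based on the columns above; `review.ipynb` provides fuller analysis):

```python
import pandas as pd

# Load one benchmark run; substitute the actual timestamped file name.
df = pd.read_csv("evals/output/eval_benchmark_results_<timestamp>.csv")

score_cols = ["rouge_f1_score", "factual_correctness_f1_score", "answer_relevancy_score"]
print(df[score_cols + ["response_time", "num_tool_calls"]].describe())
print("failed questions:", df["error"].notna().sum())
```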
To format the codebase, run:

```bash
make format
```

Core Libraries:
- `langchain` - LangChain framework
- `langchain-mcp-adapters` - MCP (Model Context Protocol) adapters
- `langchain-openai` - OpenAI integration
- `langgraph` - Graph-based agent framework
- `neo4j` - Neo4j Python driver
- `openai` - OpenAI API client
- `pydantic` - Data validation
Evaluation Libraries:
- `ragas` - RAG & agent assessment metrics framework
- `rouge-score` - Text similarity metrics
- `pandas` - Data analysis and CSV handling
Development:
- `ruff` - Code formatting and linting
Connection Issues:
- Verify your Neo4j credentials in `.env`
- Ensure your Neo4j instance is running and accessible
OpenAI Issues:
- Verify your OpenAI API key is valid
- Check your API usage limits
MCP Server Issues:
- Ensure `uvx` is available in your PATH
- The agent automatically installs the `mcp-neo4j-cypher` MCP server via uvx
Python Issues:
- Ensure Python 3.10+ is installed
- Try recreating your virtual environment if using pip
This project is provided as an example for educational and demonstration purposes.