A comprehensive Model Context Protocol (MCP) setup that provides powerful tools for research, file management, and web content fetching. This project integrates multiple MCP servers to extend your AI assistant's capabilities.
- 📚 Research Tool: Search and manage academic papers from arXiv
- 📁 Filesystem Tool: Browse, read, and manage project files
- 🌐 Fetch Tool: Retrieve content from websites and APIs
- 🤖 Multi-LLM Support: Works with Claude, Gemini, and other AI models
- 💾 Local Storage: Automatically saves research data organized by topics
- Python 3.13 or higher
- `uv` package manager (recommended) or `pip`
- API keys for your chosen LLM providers
- Claude Desktop (for MCP integration)
```bash
git clone <your-repo-url>
cd mcp_project

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh
```
```bash
# Create virtual environment and install dependencies
uv sync
```

Create a `.env` file in your project root:

```
ANTHROPIC_API_KEY=your_anthropic_api_key_here
```

Create or update your Claude Desktop configuration file:
Location (macOS): `~/Library/Application Support/Claude/claude_desktop_config.json`
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "."
      ],
      "cwd": "/path/to/your/mcp_project"
    },
    "research": {
      "command": "/path/to/your/mcp_project/.venv/bin/python",
      "args": [
        "/path/to/your/mcp_project/research_server.py"
      ],
      "cwd": "/path/to/your/mcp_project"
    },
    "fetch": {
      "command": "/path/to/your/.local/bin/uvx",
      "args": ["mcp-server-fetch"],
      "cwd": "/path/to/your/mcp_project"
    }
  }
}
```

Important: Replace `/path/to/your/mcp_project` with your actual project path.
Restart Claude Desktop completely to load the new configuration.
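If Claude Desktop fails to pick up the servers, a malformed config file is a common cause. As a quick sanity check, you can parse the file yourself; this helper is hypothetical (not part of the project), and the path shown is the macOS default from above:

```python
import json
import pathlib

# macOS default location of the Claude Desktop config file
CONFIG_PATH = pathlib.Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"


def list_mcp_servers(text: str) -> list[str]:
    """Return the configured MCP server names, sorted.

    Raises json.JSONDecodeError if the config is not valid JSON.
    """
    config = json.loads(text)
    return sorted(config.get("mcpServers", {}))


# Usage: print(list_mcp_servers(CONFIG_PATH.read_text()))
```

With the configuration above this should report `filesystem`, `research`, and `fetch`; anything else (or a parse error) points at the config file.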
- Search for Papers: `Search for 5 papers about machine learning`
- Get Paper Details: `Show me information about paper ID 1234.5678`
- Browse Saved Papers: `What papers do I have saved on physics?`
- Browse Files: `List all files in my project directory`
- Read Files: `Show me the contents of research_server.py`
- Create Files: `Create a new Python script for data analysis`
- Get Web Content: `Fetch the latest Python documentation`
- API Calls: `Get current weather data from an API`
| Tool | Description | Parameters |
|---|---|---|
| `search_papers` | Search arXiv for papers | `topic`, `max_results` |
| `extract_info` | Get paper details | `paper_id` |
| `get_available_folders` | List saved topics | None |
| Tool | Description |
|---|---|
| `read_file` | Read file contents |
| `write_file` | Write to files |
| `list_dir` | List directory contents |
| `delete_file` | Delete files |
| Tool | Description |
|---|---|
| `fetch` | Fetch content from URLs |
```
mcp_project/
├── research_server.py      # Main research MCP server
├── mcp_chatbot_L7.py       # Chatbot with LLM integration
├── pyproject.toml          # Project configuration
├── requirements.txt        # Python dependencies
├── uv.lock                 # Dependency lock file
├── papers/                 # Research data storage
│   └── [topic_name]/       # Organized by topic
│       └── papers_info.json  # Paper metadata
├── .env                    # Environment variables
└── README.md               # This file
```
The research server automatically:
- Creates topic-based directories in `papers/`
- Saves paper metadata as JSON files
- Provides search and retrieval functions
- Integrates with the arXiv API
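The storage layout above can be sketched with stdlib-only code. This is an illustrative sketch, not the server's actual implementation, and the metadata field names are assumptions:

```python
import json
from pathlib import Path


def save_paper_info(base_dir: str, topic: str, papers: dict) -> Path:
    """Save paper metadata under <base_dir>/<topic>/papers_info.json,
    merging with any previously saved entries for that topic.

    `papers` maps arXiv IDs to metadata dicts (field names are
    illustrative, not the server's actual schema).
    """
    topic_dir = Path(base_dir) / topic.lower().replace(" ", "_")
    topic_dir.mkdir(parents=True, exist_ok=True)

    info_file = topic_dir / "papers_info.json"
    existing = json.loads(info_file.read_text()) if info_file.exists() else {}
    existing.update(papers)
    info_file.write_text(json.dumps(existing, indent=2))
    return info_file
```

For example, `save_paper_info("papers", "Machine Learning", {"1234.5678": {"title": "..."}})` would create `papers/machine_learning/papers_info.json`.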
The filesystem server:
- Operates within your project directory
- Provides full file management capabilities
- Uses relative paths for portability
The fetch server:
- Handles web requests and API calls
- Supports custom user agents
- Can ignore robots.txt restrictions
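To illustrate what a custom user agent means in practice, here is a minimal stdlib sketch of building such a request. The agent string is made up for the example; this is not the fetch server's implementation:

```python
import urllib.request


def build_request(url: str, user_agent: str = "mcp-fetch-example/0.1") -> urllib.request.Request:
    """Build an HTTP request with a custom User-Agent header
    (mirroring the fetch server's user-agent option)."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})


# To actually fetch:
# body = urllib.request.urlopen(build_request("https://example.com")).read()
```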
*Screenshot: the MCP Research Assistant running with all tools working*
To add new research tools:

- Edit `research_server.py` to add new functions
- Use the `@mcp.tool()` decorator
- Test with MCP Inspector
- Update documentation
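As a sketch, a new tool might look like the following. The function name and folder layout are assumptions for illustration, and the decorator line is commented out so the snippet runs standalone without the `mcp` package installed:

```python
import json
import os

# In research_server.py the server object would already exist, e.g.:
# mcp = FastMCP("research")


# @mcp.tool()
def count_saved_papers(topic: str, papers_dir: str = "papers") -> int:
    """Return how many papers are saved for a topic (hypothetical tool)."""
    path = os.path.join(papers_dir, topic.lower().replace(" ", "_"), "papers_info.json")
    if not os.path.exists(path):
        return 0
    with open(path) as f:
        return len(json.load(f))
```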
To customize the chatbot:

- Edit `mcp_chatbot_L7.py`
- Modify tool descriptions and parameters
- Add custom prompts and resources
