Watch how we used GPT-5 and Claude Code with nano-agents here.
What? An MCP server for experimental, small-scale engineering agents with multi-provider LLM support.
Why? To test and compare Agentic Capabilities of Cloud and Local LLMs across Performance, Speed, and Cost.
"It's not about a single prompt call anymore. It's about how well your agent chains together multiple tools to accomplish real engineering results on your behalf." - From our evaluation
Multi-Model Evaluation Flow - Watch 9 models (GPT-5, Claude Opus, Local GPT-OSS) running in parallel on the same M4 Max:

Model Comparison: GPT-5 vs Local Models - Surprising results: GPT-OSS 20B/120B running on-device with $0.00 cost:

- Surprising Winners: GPT-5 Nano/Mini often outperform larger models when factoring in speed and cost
- Local Revolution: GPT-OSS 20B/120B models complete real agentic coding tasks on M4 Max (128GB RAM)
- Cost Reality Check: Claude Opus 4.1 is extraordinarily expensive - performance isn't everything
- The Trade-off Triangle: Performance vs Speed vs Cost - you don't always need the most expensive model
- Install Astral UV
- Setup Claude Code
- Setup Ollama
- Get your OpenAI API key and Anthropic API key
- Setup dotenv
  - `cp ./.env.sample ./.env` and fill out variables
  - `cp ./apps/nano_agent_mcp_server/.env.sample ./apps/nano_agent_mcp_server/.env` and fill out variables
- Clone the repository
git clone https://github.com/disler/nano-agent
- Global Install
Install `nano-agent` globally to expose it for Claude Code (or any MCP client):

cd nano-agent/apps/nano_agent_mcp_server
./scripts/install.sh
uv tool install -e .
- `cp .mcp.json.sample .mcp.json` to use `nano-agent`
- You should end up with a `.mcp.json` file that looks like this:
{
"mcpServers": {
"nano-agent": {
"command": "nano-agent",
"args": []
}
}
}

- You can also test without installing `nano-agent` globally by running it from this directory with:
{
"mcpServers": {
"nano-agent": {
"command": "uv",
"args": ["--directory", "apps/nano_agent_mcp_server", "run", "nano-agent"]
}
}
}

Now you can follow the Nano Agent Interaction section below to test out the nano agent.
There are three ways to interact with the nano agent.
- Nano Agent through the CLI (`uv run nano-cli run`) - great for understanding agent capabilities
- Nano Agent through Claude Code or any MCP client (`.mcp.json` or equivalent configuration) - great for delegating work and scaling up compute in the field
- Nano Agent through the Higher Order Prompt (HOP) and Lower Order Prompt (LOP) pattern - great for testing and comparing models across providers
Remember: when running directly, your current directory is wherever you run `uv run nano-cli run` from.
cd apps/nano_agent_mcp_server
# Test tools without API
uv run nano-cli test-tools
# Run with different models (provider auto-detected from model name)
uv run nano-cli run "List all Python files in the current directory" # gpt-5-mini (default)
uv run nano-cli run "Create a hello world script in python" --model gpt-5-nano
uv run nano-cli run "Summarize the README.md" --model gpt-5
# Test Anthropic models (requires ANTHROPIC_API_KEY)
uv run nano-cli run "Hello" --model claude-3-haiku-20240307 --provider anthropic
uv run nano-cli run "Hello" --model claude-sonnet-4-20250514 --provider anthropic
uv run nano-cli run "Hello" --model claude-opus-4-20250514 --provider anthropic
uv run nano-cli run "Hello" --model claude-opus-4-1-20250805 --provider anthropic
# Test local Ollama models (requires ollama service) (be sure to install the model first with `ollama pull gpt-oss:20b`)
uv run nano-cli run "List files" --model gpt-oss:20b --provider ollama
uv run nano-cli run "List files and count the total number of files and directories" --model gpt-oss:120b --provider ollama
# Verbose mode (shows token usage)
uv run nano-cli run "Create and edit a test file" --verbosemcp nano-agent: prompt_nano_agent "Create a hello world script in python" --model gpt-5
mcp nano-agent: prompt_nano_agent "Summarize the README.md" --model claude-opus-4-1-20250805 --provider anthropic
mcp nano-agent: prompt_nano_agent "Read the first 10 lines and last 10 lines of the README.md" --verbose
etc...
@agent-nano-agent-gpt-5-mini "Create a hello world script in python"
@agent-nano-agent-gpt-5 "Summarize the <file name>"
@agent-nano-agent-claude-opus-4-1 "<insert agentic prompt here>"
@agent-nano-agent-gpt-oss-20b "<insert agentic prompt here>"
@agent-nano-agent-gpt-oss-120b "<insert agentic prompt here>"
@agent-nano-agent-claude-sonnet-4 "<insert agentic prompt here>"
@agent-nano-agent-claude-3-haiku "<insert agentic prompt here>"
In Claude Code, call:
/perf:hop_evaluate_nano_agents .claude/commands/perf/lop_eval_1__dummy_test.md
/perf:hop_evaluate_nano_agents .claude/commands/perf/lop_eval_2__basic_read_test.md
/perf:hop_evaluate_nano_agents .claude/commands/perf/lop_eval_3__file_operations_test.md
/perf:hop_evaluate_nano_agents .claude/commands/perf/lop_eval_4__code_analysis_test.md
/perf:hop_evaluate_nano_agents .claude/commands/perf/lop_eval_5__complex_engineering_test.md
The HOP/LOP pattern enables systematic parallel evaluation of multiple models:
- HOP (Higher Order Prompt): The orchestrator that reads test files, delegates to agents in parallel, and grades results
- LOP (Lower Order Prompt): Individual test definitions with prompts, expected outputs, and grading rubrics
- Execution Flow: HOP → reads LOP → calls 9 agents simultaneously → collects results → generates comparison tables
Example: When you run /perf:hop_evaluate_nano_agents lop_eval_3__file_operations_test.md:
- HOP reads the test specification from the LOP file
- Extracts the prompt and list of agents to test
- Executes all agents in parallel (GPT-5, Claude, Local models)
- Each agent runs in isolation via the nano-agent MCP server
- Results are graded on Performance, Speed, and Cost
- Output shows ranked comparison with surprising results (e.g., Claude-3-haiku often beats expensive models)
This architecture ensures fair comparison by using the same OpenAI Agent SDK for all providers, creating a true apples-to-apples benchmark.
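To make the grading step concrete, here is a minimal, hypothetical sketch of how per-agent metrics could be combined into a ranked comparison. The field names, weights, and demo numbers are illustrative assumptions, not the actual rubric used by the HOP.

```python
# Hypothetical sketch: ranking agent results by a combined score.
# Metric names, weights, and demo values are illustrative only.
from dataclasses import dataclass

@dataclass
class AgentResult:
    name: str
    performance: float  # 0-1, how well the output matched the expected result
    seconds: float      # wall-clock execution time
    cost_usd: float     # estimated API cost ($0.00 for local models)

def rank(results: list[AgentResult]) -> list[tuple[str, float]]:
    """Combine performance, speed, and cost into one score and sort descending."""
    max_s = max(r.seconds for r in results) or 1.0
    max_c = max(r.cost_usd for r in results) or 1.0
    scored = [
        (r.name,
         0.6 * r.performance
         + 0.2 * (1 - r.seconds / max_s)
         + 0.2 * (1 - r.cost_usd / max_c))
        for r in results
    ]
    return sorted(scored, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    # Made-up demo numbers, purely for illustration.
    demo = [
        AgentResult("gpt-5-mini", 0.90, 12.0, 0.02),
        AgentResult("claude-opus-4-1", 0.95, 30.0, 0.80),
        AgentResult("gpt-oss:20b", 0.80, 45.0, 0.00),
    ]
    for name, score in rank(demo):
        print(f"{name}: {score:.2f}")
```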
- Multi-Provider Support: Seamlessly switch between OpenAI (GPT-5), Anthropic (Claude), and Ollama (local models)
- File System Operations: Read, write, edit, and analyze files autonomously
- Nested Agent Architecture: MCP server spawns internal agents for task execution
- Unified Interface: All providers use the same OpenAI SDK for consistency
- Experiment Ready: Decent testing, error handling, and token tracking
- Easy Integration: Works with Claude Desktop, or as a CLI
Feel free to add/remove/improve tools as you see fit.
Nano-Agent tools are stored in nano_agent_tools.py (a minimal illustrative sketch follows the list below).
The tools are:
- `read_file` - Read file contents
- `list_directory` - List directory contents (defaults to current working directory)
- `write_file` - Create or overwrite files
- `get_file_info` - Get file metadata (size, dates, type)
- `edit_file` - Edit files by replacing exact text matches
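For orientation, here is a minimal sketch of what such file-system tools could look like. It is an assumption-level illustration, not the actual implementation in nano_agent_tools.py, and the signatures may differ.

```python
# Minimal sketch of nano-agent-style file tools (illustrative only; the real
# implementations live in nano_agent_tools.py and may differ).
from pathlib import Path
import datetime

def read_file(path: str) -> str:
    """Return the full contents of a file."""
    return Path(path).read_text()

def list_directory(path: str = ".") -> list[str]:
    """List entries in a directory (defaults to the current working directory)."""
    return sorted(p.name for p in Path(path).iterdir())

def write_file(path: str, content: str) -> None:
    """Create or overwrite a file with the given content."""
    Path(path).write_text(content)

def get_file_info(path: str) -> dict:
    """Return basic metadata: size, modification time, and type."""
    p = Path(path)
    stat = p.stat()
    return {
        "size": stat.st_size,
        "modified": datetime.datetime.fromtimestamp(stat.st_mtime).isoformat(),
        "is_dir": p.is_dir(),
    }

def edit_file(path: str, old: str, new: str) -> None:
    """Replace an exact text match in a file; raise if the match is not found."""
    p = Path(path)
    text = p.read_text()
    if old not in text:
        raise ValueError(f"Exact match not found in {path}")
    p.write_text(text.replace(old, new, 1))
```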
nano-agent/
├── apps/                                  # ⚠️ ALL APPLICATION CODE GOES HERE
│   └── nano_agent_mcp_server/             # Main MCP server application
│       ├── src/                           # Source code
│       │   └── nano_agent/                # Main package
│       │       ├── modules/               # Core modules
│       │       │   ├── constants.py           # Model/provider constants & defaults
│       │       │   ├── data_types.py          # Pydantic models & type definitions
│       │       │   ├── files.py               # File system operations
│       │       │   ├── nano_agent.py          # Main agent execution logic
│       │       │   ├── nano_agent_tools.py    # Internal agent tool implementations
│       │       │   ├── provider_config.py     # Multi-provider configuration
│       │       │   ├── token_tracking.py      # Token usage & cost tracking
│       │       │   └── typing_fix.py          # Type compatibility fixes
│       │       ├── __main__.py            # MCP server entry point
│       │       └── cli.py                 # CLI interface (nano-cli)
│       ├── tests/                         # Test suite
│       │   ├── nano_agent/                # Unit tests
│       │   └── isolated/                  # Provider integration tests
│       ├── scripts/                       # Installation & utility scripts
│       ├── pyproject.toml                 # Project configuration & dependencies
│       ├── uv.lock                        # Locked dependency versions
│       └── .env.sample                    # Environment variables template
├── .claude/                               # Claude Code configuration
│   ├── agents/                            # Sub-agent configurations (9 models)
│   │   ├── nano-agent-gpt-5-nano.md           # OpenAI GPT-5 Nano
│   │   ├── nano-agent-gpt-5-mini.md           # OpenAI GPT-5 Mini (default)
│   │   ├── nano-agent-gpt-5.md                # OpenAI GPT-5
│   │   ├── nano-agent-claude-opus-4-1.md      # Claude Opus 4.1
│   │   ├── nano-agent-claude-opus-4.md        # Claude Opus 4
│   │   ├── nano-agent-claude-sonnet-4.md      # Claude Sonnet 4
│   │   ├── nano-agent-claude-3-haiku.md       # Claude 3 Haiku
│   │   ├── nano-agent-gpt-oss-20b.md          # Ollama 20B model
│   │   ├── nano-agent-gpt-oss-120b.md         # Ollama 120B model
│   │   └── hello-world.md                     # Simple greeting agent
│   ├── commands/                          # Claude Code commands
│   │   ├── perf/                          # Performance evaluation commands
│   │   │   ├── hop_evaluate_nano_agents.md            # Higher Order Prompt orchestrator
│   │   │   ├── lop_eval_1__dummy_test.md              # Simple Q&A test
│   │   │   ├── lop_eval_2__basic_read_test.md         # File reading test
│   │   │   ├── lop_eval_3__file_operations_test.md    # Complex I/O test
│   │   │   ├── lop_eval_4__code_analysis_test.md      # Code understanding
│   │   │   └── lop_eval_5__complex_engineering_test.md  # Full project test
│   │   ├── convert_paths_absolute.md      # Convert to absolute paths
│   │   ├── convert_paths_relative.md      # Convert to relative paths
│   │   ├── create_worktree.md             # Git worktree management
│   │   ├── plan.md                        # Planning template
│   │   ├── prime.md                       # Codebase understanding
│   │   └── build.md                       # Build commands
│   ├── hooks/                             # Development hooks
│   ├── settings.json                      # Portable settings (relative paths)
│   └── settings.local.json                # Local settings (absolute paths)
├── eval_results_1_dummy_test.md               # Q&A test benchmark results
├── eval_results_2_basic_read_test.md          # File reading benchmark results
├── eval_results_3_file_operations_test.md     # I/O benchmark results
├── eval_results_4_code_analysis_test.md       # Code analysis benchmark results
├── eval_results_5_complex_engineering_test.md # Project creation benchmark results
├── images/                                # Documentation images
│   └── nano-agent.png                     # Project logo/diagram
├── app_docs/                              # Application-specific documentation
├── ai_docs/                               # AI/LLM documentation & guides
│   ├── python_uv_mcp_server_cookbook.md   # MCP server development guide
│   ├── openai_agent_sdk_*.md              # OpenAI SDK documentation
│   ├── anthropic_openai_compat.md         # Anthropic compatibility guide
│   ├── ollama_openai_compat.md            # Ollama compatibility guide
│   └── new_openai_gpt_models.md           # GPT-5 model specifications
└── specs/                                 # Technical specifications
- Python 3.12+ (required for proper typing support)
- uv package manager
- OpenAI API key (for GPT-5 model tests)
cd apps/nano_agent_mcp_server
uv sync --extra test  # Include test dependencies

If you're using Claude Code to work on this codebase, the project includes hooks for an enhanced development experience. The hooks use relative paths by default for portability.
To activate hooks with absolute paths for your local environment, run this command in Claude Code:

/convert_paths_absolute.md

This converts the relative paths in .claude/settings.local.json to your machine's absolute paths. A backup is automatically created at .claude/settings.json.backup.
Note: The hooks are optional but provide useful features like:
- Pre/post tool use notifications
- Session tracking
- Event logging for debugging
For production use, see Installation section above.
When working with UV and optional dependencies:
- `uv sync` - Installs only the main dependencies (mcp, typer, rich)
- `uv sync --extra test` - Installs main + test dependencies (includes pytest, openai, etc.)
- `uv sync --all-extras` - Installs main + all optional dependency groups
- `uv pip list` - Shows all installed packages in the virtual environment
Important: Always use --extra test when you need to run tests, as uv sync alone will remove test dependencies.
- Copy the environment template:
cp .env.sample .env

- Add your OpenAI API key:

echo "OPENAI_API_KEY=sk-your-key-here" >> .env  # append to the copied .env

cd apps/nano_agent_mcp_server
uv run nano-agent --help

The server communicates via stdin/stdout using the MCP protocol.
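For a quick client-side sanity check, here is a minimal sketch (an assumption, not part of the repo) that uses the Python MCP client SDK to connect to the server over stdio and list its tools; per the architecture below, you should see a single `prompt_nano_agent` tool.

```python
# Sketch (assumed, not from the repo): connect to the nano-agent MCP server
# over stdio and list its tools using the Python MCP client SDK.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="nano-agent", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # expected: ['prompt_nano_agent']

asyncio.run(main())
```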
Key Concept: This is a nested agent system with two distinct agent layers.
┌────────────────────────────────────────────────────┐
│ OUTER AGENT (e.g., Claude Code, any MCP client)    │
│ • Communicates via MCP protocol                    │
│ • Sees ONE tool: prompt_nano_agent                 │
│ • Sends natural language prompts to nano-agent     │
└────────────────────────────────────────────────────┘
                          │
                          │ MCP Protocol
                          ▼
┌────────────────────────────────────────────────────┐
│ NANO-AGENT MCP SERVER (apps/nano_agent_mcp_server) │
│ • Exposes SINGLE MCP tool: prompt_nano_agent       │
│ • Receives prompts from outer agent                │
│ • Spawns internal OpenAI agent to handle request   │
└────────────────────────────────────────────────────┘
                          │
                          │ Creates & Manages
                          ▼
┌────────────────────────────────────────────────────┐
│ INNER AGENT (OpenAI GPT with function calling)     │
│ • Created fresh for each prompt_nano_agent call    │
│ • Has its OWN tools (not visible to outer agent):  │
│    - read_file: Read file contents                 │
│    - list_directory: List directory contents       │
│    - write_file: Create/overwrite files            │
│    - get_file_info: Get file metadata              │
│ • Runs autonomous loop (max 20 turns)              │
│ • Returns final result to MCP server → outer agent │
└────────────────────────────────────────────────────┘
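To make the two layers concrete, here is a minimal, hypothetical sketch of an MCP server exposing a single `prompt_nano_agent` tool via the FastMCP helper from the Python MCP SDK. The inner-agent call is a placeholder and does not reproduce the repo's actual OpenAI Agent SDK logic.

```python
# Hypothetical sketch of the nested-agent shape (not the repo's actual code).
# Outer agents see exactly one MCP tool; the inner agent loop is a stand-in.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("nano-agent")

def run_inner_agent(prompt: str, model: str, provider: str) -> str:
    """Placeholder for the inner agent: in the real server this spawns an
    OpenAI Agent SDK agent with its own file tools and runs up to 20 turns."""
    return f"[{provider}:{model}] would autonomously handle: {prompt}"

@mcp.tool()
def prompt_nano_agent(prompt: str, model: str = "gpt-5-mini",
                      provider: str = "openai") -> str:
    """The single tool exposed to the outer agent (Claude Code, etc.)."""
    return run_inner_agent(prompt, model, provider)

if __name__ == "__main__":
    mcp.run()  # communicates with the MCP client over stdin/stdout
```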
# Run all integration tests
uv run pytest tests/ -v
# Test specific functionality
uv run pytest tests/nano_agent/modules/test_nano_agent.py::TestExecuteNanoAgent -v
# Quick validation
uv run pytest -k "test_execute_nano_agent_success" -v

# Validate tools work (no API needed)
uv run nano-cli test-tools
# Quick agent test
export OPENAI_API_KEY=sk-your-key
uv run nano-cli run "What is 2+2?" # Uses DEFAULT_MODELThe nano agent supports multiple LLM providers through a unified interface using the OpenAI SDK. All providers are accessed through OpenAI-compatible endpoints, providing a consistent API.
Feel free to add/remove providers and models as you see fit.
- Models: `gpt-5`, `gpt-5-mini` (default), `gpt-5-nano`, `gpt-4o`
- Requirements: `OPENAI_API_KEY` environment variable
- Special Features:
  - GPT-5 models use `max_completion_tokens` instead of `max_tokens`
  - GPT-5 models only support temperature=1
  - Extended context windows (400K tokens)
- Models: `claude-opus-4-1-20250805`, `claude-opus-4-20250514`, `claude-sonnet-4-20250514`, `claude-3-haiku-20240307`
- Requirements: `ANTHROPIC_API_KEY` environment variable
- Implementation: Uses Anthropic's OpenAI-compatible endpoint
- Base URL: `https://api.anthropic.com/v1/`
- Models: `gpt-oss:20b`, `gpt-oss:120b`, or any model you've pulled locally
- Requirements: Ollama service running locally
- Implementation: Uses Ollama's OpenAI-compatible API
- Base URL: `http://localhost:11434/v1`
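As a rough illustration of the unified-interface idea, the sketch below reaches all three providers through the OpenAI Python SDK by swapping the base URL and API key. It is an assumption-level sketch, not the repo's provider_config.py logic; the model names and environment variables mirror the ones listed above.

```python
# Sketch (assumed, not the repo's provider_config.py): one OpenAI client,
# three providers, selected purely by base_url + api_key + model name.
import os
from openai import OpenAI

PROVIDERS = {
    "openai": dict(base_url=None, api_key=os.environ.get("OPENAI_API_KEY")),
    "anthropic": dict(base_url="https://api.anthropic.com/v1/",
                      api_key=os.environ.get("ANTHROPIC_API_KEY")),
    "ollama": dict(base_url="http://localhost:11434/v1", api_key="ollama"),
}

def chat(provider: str, model: str, prompt: str) -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Example calls (require the corresponding keys/services):
# chat("openai", "gpt-5-mini", "Hello")
# chat("anthropic", "claude-3-haiku-20240307", "Hello")
# chat("ollama", "gpt-oss:20b", "Hello")
```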
# OpenAI (default)
uv run nano-cli run "Create a hello world script"
# Use specific OpenAI model
uv run nano-cli run "Analyze this code" --model gpt-5 --provider openai
# Anthropic
uv run nano-cli run "Write a test file" --model claude-3-haiku-20240307 --provider anthropic
# Ollama (local)
uv run nano-cli run "List files" --model gpt-oss:20b --provider ollamaThe nano-agent includes a sophisticated multi-layer evaluation system for comparing LLM performance across different providers and models. This creates a level playing field for benchmarking by using the same execution environment (OpenAI Agent SDK) regardless of the underlying provider.
"Don't trust any individual benchmark. You need to crack open the hood of all these models and say, where is the true value?" - Engineering is all about trade-offs.
The evaluation system's core innovation is the HOP/LOP (Higher Order Prompt / Lower Order Prompt) pattern, which creates a hierarchical orchestration system for parallel model testing:
┌─────────────────────────────────────────────────────────┐
│ 1. HIGHER ORDER PROMPT (HOP)                            │
│ File: .claude/commands/perf/hop_evaluate_nano_agents.md │
│ • Orchestrates entire evaluation process                │
│ • Accepts test case files as $ARGUMENTS                 │
│ • Formats and grades results                            │
│ • Generates performance comparison tables               │
└─────────────────────────────────────────────────────────┘
                            │
                            │ Reads & Executes
                            ▼
┌─────────────────────────────────────────────────────────┐
│ 2. LOWER ORDER PROMPT (LOP)                             │
│ Files: .claude/commands/perf/lop_eval_*.md              │
│ • Defines test cases (prompts to evaluate)              │
│ • Lists agents to test (@agent-nano-agent-*)            │
│ • Specifies expected outputs                            │
│ • Provides grading rubrics                              │
└─────────────────────────────────────────────────────────┘
                            │
                            │ @agent References
                            ▼
┌─────────────────────────────────────────────────────────┐
│ 3. CLAUDE CODE SUB-AGENTS                               │
│ Files: .claude/agents/nano-agent-*.md                   │
│ • Individual agent configurations                       │
│ • Each specifies model + provider combination           │
│ • Color-coded by model family:                          │
│    - green: GPT-5 series (nano, mini, standard)         │
│    - blue: GPT-OSS series (20b, 120b)                   │
│    - purple: Claude 4 Opus models                       │
│    - orange: Claude 4 Sonnet & Claude 3 Haiku           │
└─────────────────────────────────────────────────────────┘
                            │
                            │ Calls MCP Server
                            ▼
┌─────────────────────────────────────────────────────────┐
│ 4. NANO-AGENT MCP SERVER                                │
│ Function: prompt_nano_agent(prompt, model, provider)    │
│ • Creates isolated agent instance per request           │
│ • Uses OpenAI Agent SDK for ALL providers               │
│ • Ensures consistent execution environment              │
│ • Returns structured results with metrics               │
└─────────────────────────────────────────────────────────┘
- Fair Comparison: All models use the same OpenAI Agent SDK, eliminating implementation differences
- Parallel Execution: Agents run simultaneously, reducing temporal variations
- Structured Metrics: Consistent tracking of time, tokens, and costs across all providers
- Extensibility: Easy to add new models, providers, or test cases
- Visual Hierarchy: Color-coded agents make results easy to scan in Claude Code
- Reproducibility: Same prompts and execution environment ensure consistent benchmarks
MIT
And prepare for Agentic Engineering
Learn to code with AI using the foundational Principles of AI Coding.
Follow the IndyDevDan YouTube channel for more AI coding tips and tricks.
