A production-grade AI coding agent CLI built in Go.
Inspired by Claude Code's architecture and rebuilt from scratch with more features and better performance.
| Feature | Description |
|---|---|
| Multi-provider LLM | Anthropic Claude + OpenAI + any OpenAI-compatible API |
| 14 Built-in Tools | Read/Write/Edit files, Bash, Grep, Glob, WebFetch, WebSearch, Git, Todo, Notebook |
| Permission System | 3 modes (default/auto/plan) + two-stage safety classifier |
| Auto Memory | Cross-session memory stored as Markdown files (~/.codeany/memory/) |
| MCP Support | JSON-RPC 2.0 over stdio; connect any MCP server |
| LSP Integration | Auto-inject compiler diagnostics after each file write |
| Agent Teams | Spawn parallel sub-agents with git worktree isolation |
| Plugin System | Extend with skills, agents, hooks, commands, MCP/LSP servers |
| Prompt Caching | 3-layer caching (system prompt + tools + last tool_result) |
| Auto Compact | Auto-summarize the conversation when context hits 85% |
| Rich TUI | Bubbletea UI with streaming, glamour markdown, status bar |
| Cost Tracking | Real-time token usage + estimated API cost |
- Pure Go binary: no Node.js runtime, instant startup
- Multi-provider: works with Anthropic, OpenAI, local Ollama, or any OpenAI-compatible API
- Deferred tools: low-frequency tools are loaded on demand, cutting token overhead by roughly 40%
- Two-stage permission classifier: pattern matching first, with a Haiku LLM fallback for ambiguous commands
- Configurable everything: permission mode, context limits, model, memory, and LSP are all configurable
```bash
curl -fsSL https://raw.githubusercontent.com/thinkany-ai/codeany/main/install.sh | sh
```

The script auto-detects your OS and architecture, downloads the latest binary from GitHub Releases, and installs it to /usr/local/bin.
Download the pre-built binary for your platform from GitHub Releases:
| Platform | File |
|---|---|
| macOS (Apple Silicon) | codeany_darwin_arm64 |
| macOS (Intel) | codeany_darwin_amd64 |
| Linux (x86_64) | codeany_linux_amd64 |
| Linux (ARM64) | codeany_linux_arm64 |
| Windows (x86_64) | codeany_windows_amd64.exe |
```bash
# Example: macOS Apple Silicon
curl -fsSL https://github.com/thinkany-ai/codeany/releases/latest/download/codeany_darwin_arm64 -o codeany
chmod +x codeany
sudo mv codeany /usr/local/bin/
```

Build from source:

```bash
git clone https://github.com/thinkany-ai/codeany.git
cd codeany
go build -o codeany .
sudo mv codeany /usr/local/bin/
```

Or install with Go:

```bash
go install github.com/thinkany-ai/codeany@latest
```

Quick start:

```bash
# Set API key
export ANTHROPIC_API_KEY="sk-ant-..."

# Launch interactive mode
codeany

# Start with a prompt
codeany "review the code in this directory and suggest improvements"

# Non-interactive (pipe-friendly)
codeany --print "what does main.go do?"
```

Configuration (~/.codeany/config.yaml):

```yaml
default_model: claude-sonnet-4-5
permission_mode: default # default | auto | plan
max_iterations: 25
context_window: 200000
compact_threshold: 0.85
memory_enabled: true
lsp_enabled: true
models:
  anthropic:
    api_key: "" # or set ANTHROPIC_API_KEY env var
  openai:
    api_key: "" # or set OPENAI_API_KEY env var
    base_url: https://api.openai.com/v1
mcp_servers: []
```

Environment variables:

```bash
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
```

Usage:

```
codeany [flags] [initial_prompt]

Flags:
  -m, --model string   Model to use (e.g. claude-sonnet-4-5, gpt-4o)
  -d, --dir string     Working directory (default: current dir)
  -p, --print          Non-interactive mode, print response and exit
      --mode string    Permission mode: default | auto | plan
      --no-memory      Disable memory system
      --no-lsp         Disable LSP integration
  -v, --version        Show version
```
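Flags can be combined freely. As a hypothetical sketch (assuming the binary is installed and an API key is set; the flags themselves are from the table above):

```shell
# One-shot review with an explicit model, auto-approving tool calls
codeany -m claude-sonnet-4-5 --mode auto --print "summarize recent changes"

# Point the agent at another project without changing directory
codeany -d ~/src/other-project --no-lsp "explain the build setup"
```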
| Key | Action |
|---|---|
| Enter | Send message |
| Shift+Enter / Ctrl+J | Insert newline |
| Ctrl+C / Ctrl+D | Exit |
| Ctrl+L | Clear screen |
| Up / Down | Scroll history |
| Command | Description |
|---|---|
| /help | Show help |
| /clear | Clear conversation |
| /model <name> | Switch model |
| /cost | Show token usage & cost |
| /skills | List available skills |
| /compact | Manually compact conversation |
| /plan | Switch to plan (read-only) mode |
| /auto | Switch to auto mode |
Trigger a skill with /skillname:
| Skill | Description |
|---|---|
| /commit | Generate conventional commit message from git diff and commit |
| /pr | Generate PR title + description from diff against main |
| /review | Code review of staged/unstaged changes |
| /init | Scan project and generate CODEANY.md config file |
| Mode | Behavior |
|---|---|
| default | Safe tools auto-run; dangerous + write ops require confirmation |
| auto | All tools auto-run (deny rules still apply); suitable for CI |
| plan | Read-only sandbox; write + dangerous ops silently blocked |
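As an illustration (command lines are hypothetical combinations of the documented flags): auto mode with --print suits unattended CI jobs, while plan mode is a safe way to explore an unfamiliar codebase:

```shell
# CI: run tools without confirmation prompts, print one answer, exit
codeany --mode auto --print "run the tests and summarize any failures"

# Exploration: read-only, nothing on disk can change
codeany --mode plan "map out how requests flow through this service"
```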
| Tool | Type | Description |
|---|---|---|
| read | safe | Read file contents with line numbers, offset/limit pagination |
| write | write | Write/create files (auto mkdir) |
| edit | write | Precise string replacement (errors if not unique) |
| bash | dangerous | Execute shell commands (120s timeout, output truncated >500 lines) |
| grep | safe | Regex search (self-implemented, 3 output modes) |
| glob | safe | Find files by pattern |
| list_dir | safe | List directory with git status |
| web_fetch | safe | Fetch URL and strip HTML |
| web_search | safe | DuckDuckGo search, top 10 results |
| git | safe | git status/log/diff/add/commit/push |
| todo_read | deferred | Read todo list |
| todo_write | deferred | Write todo list |
| notebook_read | deferred | Read notebook |
| tool_search | safe | Discover deferred tools by keyword |
CodeAny remembers things across sessions using Markdown files in ~/.codeany/memory/:
```
~/.codeany/memory/
└── {project-hash}/
    ├── MEMORY.md         # Index (auto-injected into system prompt)
    ├── user_prefs.md     # Your preferences and work style
    ├── feedback.md       # Behavior corrections from past sessions
    ├── project_notes.md  # Project conventions
    └── references.md     # Links, boards, docs
```
Memory files are plain Markdown, so you can edit them directly.
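Because memory is just files, ordinary shell tools work on it. A hypothetical sketch (paths follow the layout above; the example note is made up):

```shell
# List per-project memory and view the auto-injected index
ls ~/.codeany/memory/
cat ~/.codeany/memory/*/MEMORY.md

# Record a preference by hand; it is picked up in the next session
echo "- Prefer table-driven tests in Go" >> ~/.codeany/memory/*/user_prefs.md
```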
Add MCP servers to ~/.codeany/config.yaml:
```yaml
mcp_servers:
  - name: filesystem
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
  - name: github
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_TOKEN: "${GITHUB_TOKEN}"
```

MCP tools appear as {server_name}__{tool_name} in the agent.
```
codeany/
├── main.go
├── cmd/          # CLI (cobra)
├── core/         # Agent loop, session, context, prompt builder
├── llm/          # LLM clients (Anthropic, OpenAI, streaming SSE)
├── tools/        # 14 tools + registry + executor pipeline
├── permissions/  # 3-mode permission system + two-stage classifier
├── memory/       # Filesystem-based cross-session memory
├── mcp/          # MCP client (JSON-RPC 2.0 over stdio)
├── lsp/          # LSP client (Content-Length framing)
├── skills/       # Built-in skills + plugin/project skill loading
├── agents/       # Agent Teams + sub-agent execution
├── plugins/      # Plugin manifest loader
├── tui/          # Terminal UI (bubbletea + lipgloss + glamour)
└── config/       # Configuration (viper)
```
MIT; see LICENSE.