OllamaClaw is a Telegram-first Go coding agent that uses Ollama. I'm hacking on this as a playground for different ideas and experiments. Currently using it to run crons, reminders, and small tasks.
Current app version: 0.1.6.
It supports:
- Shared agent core for `repl` and `telegram` modes
- Built-in tools: `bash`, `read_file`, `write_file`, `web_search`, `web_fetch`, `system_prompt_get`, `system_prompt_update`, `system_prompt_history`, `system_prompt_rollback`
- Local SQLite persistence with per-chat sessions
- Context compaction (summary + recent turns)
```shell
# install from source repo
go install github.com/ParthSareen/OllamaClaw@latest

# or build locally in the repo
go build -o ollamaclaw .
```

```shell
./ollamaclaw launch
```

If config is missing, OllamaClaw opens an interactive setup UI. It will ask for:
- Ollama host
- Default model
- Telegram bot token
- Telegram owner ID
The owner ID is used for both `owner_chat_id` and `owner_user_id`.
To reconfigure later:

```shell
./ollamaclaw configure
```

To run a local REPL:

```shell
./ollamaclaw repl
```

The bot only handles private chats and only responds to the configured owner allowlist.

`launch` prints live runtime logs (updates, commands, tool calls, cron output, and errors) to stdout.
Optional:

```shell
./ollamaclaw repl --model kimi-k2.5:cloud
```

All commands:

```shell
ollamaclaw repl [--model <name>]
ollamaclaw launch
ollamaclaw configure
ollamaclaw telegram init [--token <telegram-bot-token>] [--owner-id <id>] [--owner-chat-id <id>] [--owner-user-id <id>]
ollamaclaw telegram run   # legacy alias for launch
```

- `/start` shows onboarding/help text
- `/help` shows usage
- `/model [name]` shows/sets per-chat model
- `/tools` lists built-in tools
- `/cron list [active|all]` lists cron jobs
- Cron schedules and displayed cron timestamps are interpreted in `America/Los_Angeles` (PST/PDT)
- Cron timezone prefixes (`TZ=`/`CRON_TZ=`) are intentionally rejected; OllamaClaw always runs cron schedules in `America/Los_Angeles`
- `/cron safe <id>` marks a cron as safe (Telegram bash approvals auto-approve for that cron)
- `/cron unsafe <id>` removes safe mode from a cron
- `/cron prefetch list <id>` shows learned prefetched commands for a cron job
- Cron jobs auto-learn stable bash commands from prior runs and prefetch them on future runs (`auto_prefetch` on by default)
- Prefetched commands are executed immediately before each cron agent turn and injected as synthetic `bash` tool-call context with `run_id`, `run_started_at`, and per-command `fetched_at` timestamps; only the current run's `run_id` context is visible to the model
- Telegram bash policy defaults to allow for non-destructive commands; potentially destructive commands require approval; critical lifecycle commands remain blocked
- `/show tools [on|off]` toggles live tool event messages
- `/show thinking [on|off]` toggles thinking visibility mode
- `/show dreaming [on|off]` toggles background long-term-memory (“dreaming”) event notifications for this chat (default: on)
- `/verbose [on|off]` enables/disables tool + thinking traces for this chat session
- `/think [on|off|low|medium|high|default]` shows/sets think value
- `/status` shows model, estimated next prompt size (`len(request_json)/4`), dreaming notification state, lifetime token counters, compaction thresholds, last compaction snapshot, DB path
- `/fullsystem` shows the exact system context currently injected (system prompt + core memories + latest conversation summary)
- `/reset` archives the current session and starts a fresh one
- `/stop` interrupts the active turn
- `/restart` restarts the launch loop from Telegram
- Send photos (or image documents) with an optional caption; image bytes are fetched from Telegram and forwarded to Ollama chat `images`
- If messages arrive in quick succession, OllamaClaw waits for a 1.5s quiet window, coalesces them with newlines, then runs one turn
`bash`

Input:

```json
{"command":"ls -la","timeout_seconds":30}
```

Output:

```json
{"exit_code":0,"stdout":"...","stderr":""}
```

`read_file`

Input:

```json
{"path":"/absolute/or/relative/path.txt"}
```

Output:

```json
{"path":"...","content":"..."}
```

`write_file`

Input:

```json
{"path":"./notes.txt","content":"hello","create_dirs":true}
```

Output:

```json
{"path":"./notes.txt","bytes_written":5}
```

`web_search`

Input:

```json
{"query":"latest ollama release","max_results":5}
```

Output:

```json
{"results":[{"title":"...","url":"...","content":"..."}]}
```

`web_fetch`

Input:

```json
{"url":"https://ollama.com"}
```

Output:

```json
{"title":"...","content":"...","links":["..."]}
```

`system_prompt_get`

Reads managed system prompt details (base/overlay paths, overlay content, optional revision history).
`system_prompt_update`

Safely updates only the managed overlay (set, append, clear) with revision history logging.

`system_prompt_history`

Lists recent managed overlay revisions.

`system_prompt_rollback`

Rolls the managed overlay back to a prior revision from `system_prompt_history`.
`web_search` and `web_fetch` use Ollama hosted APIs and require:

```shell
export OLLAMA_API_KEY=...
```

File: `~/.ollamaclaw/config.json`
Runtime system prompt file: `~/.ollamaclaw/system_prompt.txt` (read dynamically each turn; falls back to the built-in prompt if missing/empty)
Managed system prompt overlay file: `~/.ollamaclaw/system_prompt.overlay.md` (agent-updatable layer)
Managed overlay history file: `~/.ollamaclaw/system_prompt.overlay.history.jsonl` (append-only revision log)
Core memories file: `~/.ollamaclaw/core_memories.md` (updated in the background every 10 user turns and injected as a system context block)
Defaults:

```json
{
  "ollama_host": "http://localhost:11434",
  "default_model": "kimi-k2.5:cloud",
  "db_path": "~/.ollamaclaw/state.db",
  "compaction_threshold": 0.8,
  "keep_recent_turns": 8,
  "context_window_tokens": 252000,
  "tool_output_max_bytes": 16384,
  "bash_timeout_seconds": 120,
  "telegram": {
    "bot_token": "",
    "owner_chat_id": 0,
    "owner_user_id": 0
  }
}
```

SQLite database: `~/.ollamaclaw/state.db`
Tables:
`settings`, `sessions`, `messages`, `compactions`, `cron_jobs`, `cron_prefetch_commands`
Compaction archives old rows (`archived=1`) and keeps raw history in SQLite.
- Trigger: prompt token count from Ollama exceeds the configured threshold (`context_window_tokens * compaction_threshold`)
- Action: summarize older unarchived history using Ollama
- Result: save the summary in `compactions`, archive old messages, keep recent turns active
- Active prompt: system + latest summary + unarchived recent messages
- Telegram sends a compaction notice message when compaction happens during a turn (including background cron-triggered turns sent to Telegram sessions)
- Trigger: every `10` user turns per session (`role=user` messages only)
- Telegram notifies the session when a background core-memory refresh starts/completes/fails (toggle with `/show dreaming on|off`)
- Dreaming completion notifications include a programmatic change summary (added/removed/kept items, char count delta, and short added/removed previews) with no extra LLM call
- Runs in the background (non-blocking to the active chat/cron turn)
- Summarizes stable preferences/workflows/constraints from recent dialogue
- Writes to `~/.ollamaclaw/core_memories.md` using managed markers
- Enforces a hard cap of `4000` characters for stored/injected core memory content
- Injects managed core memories into prompt context as a dedicated system message
```shell
go test ./...
go build ./...
```