OllamaClaw

OllamaClaw is a Telegram-first Go coding agent that uses Ollama. It serves as a playground for different ideas and experiments, and currently runs crons, reminders, and small tasks. Current app version: 0.1.6.

It supports:

  • Shared agent core for repl and telegram modes
  • Built-in tools: bash, read_file, write_file, web_search, web_fetch, system_prompt_get, system_prompt_update, system_prompt_history, system_prompt_rollback
  • Local SQLite persistence with per-chat sessions
  • Context compaction (summary + recent turns)

Install

# install from source repo
go install github.com/ParthSareen/OllamaClaw@latest

# or build locally in the repo
go build -o ollamaclaw .

Quickstart

1) Launch (auto-onboarding)

./ollamaclaw launch

If config is missing, OllamaClaw opens an interactive setup UI. It will ask for:

  • Ollama host
  • Default model
  • Telegram bot token
  • Telegram owner ID

The owner ID is used for both owner_chat_id and owner_user_id.

2) Update config later

./ollamaclaw configure

3) Run REPL mode

./ollamaclaw repl

The bot handles private chats only and responds only to the configured owner allowlist. launch prints live runtime logs (updates, commands, tool calls, cron output, and errors) to stdout.

Optional:

./ollamaclaw repl --model kimi-k2.5:cloud

CLI

ollamaclaw repl [--model <name>]
ollamaclaw launch
ollamaclaw configure
ollamaclaw telegram init [--token <telegram-bot-token>] [--owner-id <id>] [--owner-chat-id <id>] [--owner-user-id <id>]
ollamaclaw telegram run   # legacy alias for launch

Telegram commands

  • /start shows onboarding/help text
  • /help shows usage
  • /model [name] shows/sets per-chat model
  • /tools lists built-in tools
  • /cron list [active|all] lists cron jobs
  • Cron schedules and displayed cron timestamps are always interpreted in America/Los_Angeles (PST/PDT); timezone prefixes (TZ= / CRON_TZ=) are intentionally rejected
  • /cron safe <id> marks a cron as safe (Telegram bash approvals auto-approve for that cron)
  • /cron unsafe <id> removes safe mode from a cron
  • /cron prefetch list <id> shows learned prefetched commands for a cron job
  • Cron jobs auto-learn stable bash commands from prior runs and prefetch them on future runs (auto_prefetch on by default)
  • Prefetched commands are executed immediately before each cron agent turn and injected as synthetic bash tool-call context with run_id, run_started_at, and per-command fetched_at timestamps; only the current run's run_id context is visible to the model
  • Telegram bash policy defaults to allow for non-destructive commands; potentially destructive commands require approval; critical lifecycle commands remain blocked
  • /show tools [on|off] toggles live tool event messages
  • /show thinking [on|off] toggles thinking visibility mode
  • /show dreaming [on|off] toggles background long-term-memory (“dreaming”) event notifications for this chat (default: on)
  • /verbose [on|off] enables/disables tool + thinking traces for this chat session
  • /think [on|off|low|medium|high|default] shows/sets think value
  • /status shows model, estimated next prompt size (len(request_json)/4), dreaming notification state, lifetime token counters, compaction thresholds, last compaction snapshot, DB path
  • /fullsystem shows the exact system context currently injected (system prompt + core memories + latest conversation summary)
  • /reset archives current session and starts a fresh one
  • /stop interrupts the active turn
  • /restart restarts the launch loop from Telegram
  • Send photos (or image documents) with an optional caption; image bytes are fetched from Telegram and forwarded to Ollama chat images
  • If messages arrive in quick succession, OllamaClaw waits for a 1.5s quiet window, coalesces them with newlines, then runs one turn

Built-in tools

bash

Input:

{"command":"ls -la","timeout_seconds":30}

Output:

{"exit_code":0,"stdout":"...","stderr":""}

read_file

Input:

{"path":"/absolute/or/relative/path.txt"}

Output:

{"path":"...","content":"..."}

write_file

Input:

{"path":"./notes.txt","content":"hello","create_dirs":true}

Output:

{"path":"./notes.txt","bytes_written":5}

web_search

Input:

{"query":"latest ollama release","max_results":5}

Output:

{"results":[{"title":"...","url":"...","content":"..."}]}

web_fetch

Input:

{"url":"https://ollama.com"}

Output:

{"title":"...","content":"...","links":["..."]}

system_prompt_get

Reads managed system prompt details (base/overlay paths, overlay content, optional revision history).

system_prompt_update

Safely updates only the managed overlay (set, append, clear) with revision history logging.

system_prompt_history

Lists recent managed overlay revisions.

system_prompt_rollback

Rolls managed overlay back to a prior revision from system_prompt_history.

web_search and web_fetch use Ollama hosted APIs and require:

export OLLAMA_API_KEY=...

Config

File: ~/.ollamaclaw/config.json

  • Runtime system prompt file: ~/.ollamaclaw/system_prompt.txt (read dynamically each turn; falls back to the built-in prompt if missing/empty)
  • Managed system prompt overlay file: ~/.ollamaclaw/system_prompt.overlay.md (agent-updatable layer)
  • Managed overlay history file: ~/.ollamaclaw/system_prompt.overlay.history.jsonl (append-only revision log)
  • Core memories file: ~/.ollamaclaw/core_memories.md (updated in the background every 10 user turns and injected as a system context block)

Defaults:

{
  "ollama_host": "http://localhost:11434",
  "default_model": "kimi-k2.5:cloud",
  "db_path": "~/.ollamaclaw/state.db",
  "compaction_threshold": 0.8,
  "keep_recent_turns": 8,
  "context_window_tokens": 252000,
  "tool_output_max_bytes": 16384,
  "bash_timeout_seconds": 120,
  "telegram": {
    "bot_token": "",
    "owner_chat_id": 0,
    "owner_user_id": 0
  }
}

Persistence

SQLite database: ~/.ollamaclaw/state.db

Tables:

  • settings
  • sessions
  • messages
  • compactions
  • cron_jobs
  • cron_prefetch_commands

Compaction archives old rows (archived=1) and keeps raw history in SQLite.

Compaction behavior

  • Trigger: prompt token count from Ollama exceeds configured threshold (context_window_tokens * compaction_threshold)
  • Action: summarize older unarchived history using Ollama
  • Result: save summary in compactions, archive old messages, keep recent turns active
  • Active prompt: system + latest summary + unarchived recent messages
  • Telegram sends a compaction notice message when compaction happens during a turn (including background cron-triggered turns sent to Telegram sessions)
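The trigger rule in the first bullet reduces to a one-line comparison. With the default config (252000-token window, 0.8 threshold), compaction fires once the prompt exceeds 201600 tokens; the helper below is a sketch of that arithmetic, not the project's actual code.

```go
package main

import "fmt"

// shouldCompact applies the trigger rule: compaction fires once the prompt
// token count reported by Ollama exceeds
// context_window_tokens * compaction_threshold.
func shouldCompact(promptTokens, contextWindowTokens int, threshold float64) bool {
	return float64(promptTokens) > float64(contextWindowTokens)*threshold
}

func main() {
	fmt.Println(shouldCompact(201600, 252000, 0.8)) // false: exactly at threshold
	fmt.Println(shouldCompact(201601, 252000, 0.8)) // true: above threshold
}
```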

Core memories behavior

  • Trigger: every 10 user turns per session (role=user messages only)
  • Telegram notifies the session when a background core-memory refresh starts/completes/fails (toggle with /show dreaming on|off)
  • Dreaming completion notifications include a programmatic change summary (added/removed/kept items, char count delta, and short added/removed previews) with no extra LLM call
  • Runs in background (non-blocking to active chat/cron turn)
  • Summarizes stable preferences/workflows/constraints from recent dialogue
  • Writes to ~/.ollamaclaw/core_memories.md using managed markers
  • Enforces a hard cap of 4000 characters for stored/injected core memory content
  • Injects managed core memories into prompt context as a dedicated system message

Development

go test ./...
go build ./...

About

A minimal take on OpenClaw using Ollama
