Summary
I'd like to propose an agentic mode for Kimu — a flag-gated extension that lets an external AI agent (Claude Code CLI or similar) draft an initial project_data/<id>.json plan, which the user then edits in the existing timeline, with Kimu's undo stack emitting structured diff events so the agent can learn the user's editing style across sessions.
This issue is for early design feedback before any PR lands, per standard friendly-fork etiquette.
What the feature does
- An external planner writes a new project_data/<id>.json (same schema Kimu already uses, optionally carrying a pass-through provenance block that Kimu treats as opaque).
- The user opens the project in Kimu and edits normally.
- Kimu's existing useTimeline.ts undo stack emits structured RFC 6902 diff events (one per command-commit boundary — not per mousemove) into a sibling project_data/<id>.diff-log.jsonl.
- The external planner reads that JSONL asynchronously, extracts patterns across sessions, and writes back a small editing-preferences.json it can consult the next time it drafts a plan.
The editor itself does not embed an AI client. All planning runs out-of-process; Kimu only gains file I/O for the diff log plus a reducer-middleware hook on snapshotTimeline.
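To pin down the shape of those diff events: a single line of project_data/<id>.diff-log.jsonl could look like the entry below (pretty-printed here; JSONL would keep it on one line). Everything except the patch array — which follows RFC 6902 — is an illustrative field name, not a settled schema.

```json
{
  "ts": "2025-01-15T10:03:22Z",
  "session": "abc123",
  "label": "trim-clip",
  "patch": [
    { "op": "replace", "path": "/clips/2/start", "value": 1.25 },
    { "op": "replace", "path": "/clips/2/duration", "value": 3.5 }
  ]
}
```

One entry per command-commit boundary keeps the log small and gives the planner semantically meaningful units to learn from.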
How it fits Kimu's existing architecture
useTimeline.ts — already snapshots on snapshotTimeline(). Proposal: add an optional onSnapshot?: (prev, next, meta) => void callback. Zero refactor of the useState model.
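As a sketch of the shape I have in mind — every name here except snapshotTimeline is illustrative, not Kimu's actual types, and the hook model is reduced to a plain function for clarity:

```typescript
// Hypothetical types standing in for Kimu's real timeline state.
type TimelineSnapshot = { clips: unknown[] };
type SnapshotMeta = { source: string; at: number };
type OnSnapshot = (
  prev: TimelineSnapshot,
  next: TimelineSnapshot,
  meta: SnapshotMeta
) => void;

function makeTimeline(onSnapshot?: OnSnapshot) {
  let current: TimelineSnapshot = { clips: [] };
  const undoStack: TimelineSnapshot[] = [];

  function snapshotTimeline(next: TimelineSnapshot, meta: SnapshotMeta): void {
    undoStack.push(current);
    // The only new behavior: notify an optional observer at the commit
    // boundary. When no callback is passed, nothing changes.
    onSnapshot?.(current, next, meta);
    current = next;
  }

  return { snapshotTimeline, undoStack, get current() { return current; } };
}
```

The point is that the callback is purely additive: existing callers pass nothing and see identical behavior, while the diff-log writer subscribes and computes an RFC 6902 patch from (prev, next) out-of-band.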
project_data/*.json — already the canonical persistence format via timeline.store.ts. Proposal: preserve unknown top-level keys on save (this falls out for free if load/save round-trip the parsed object, rather than copying known fields into a typed model that drops the rest) so a provenance block added by an external tool survives round-trips.
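For concreteness, the pass-through behavior I mean is just this — assuming load/save go through plain JSON.parse/JSON.stringify; the provenance contents are made up:

```typescript
// Hypothetical project file written by an external planner.
const raw = '{"id":"p1","clips":[],"provenance":{"agent":"planner","v":1}}';

const project = JSON.parse(raw);   // provenance comes along as an opaque value
project.clips.push({ start: 0 });  // user edits touch only known fields
const saved = JSON.stringify(project);
// provenance survives because nothing copied the object field-by-field
```

If the store instead rebuilds a typed object on load, the fix is to stash the original parsed object and spread it back under the known fields on save.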
backend/tools_registry.py + backend/schema.py — already define a universal tool-call envelope (UniversalToolCall, FunctionCallResponse). Proposal: keep that shape as-is so swapping the LLM provider is a single-file change.
backend/main.py — currently calls google-genai. I intend to open a follow-up PR that introduces a thin provider abstraction so the /ai endpoint can target either Gemini or Anthropic via env config; this is a generic improvement independent of the agentic-mode feature.
docker-compose.dev.yml — I'd also propose defaulting the dev compose port bindings to 127.0.0.1 (AGPL Section 13 hygiene), opt-in LAN exposure via a separate override file.
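Concretely, something like the fragment below in the dev compose file — the service name and port are placeholders for whatever Kimu actually uses, and LAN exposure would live in a separate override file that redefines ports:

```yaml
services:
  backend:
    ports:
      - "127.0.0.1:8000:8000"   # loopback-only by default
```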
AGPL-3.0 compatibility approach
- All in-process AI calls from Kimu's backend use @anthropic-ai/sdk (TypeScript) or anthropic (Python) — both MIT licensed and fully AGPL-compatible.
- The proprietary @anthropic-ai/claude-agent-sdk is never added as a dependency of Kimu. It lives in a separate, non-distributed sidecar tool that invokes claude-code as a subprocess on the operator's machine — Kimu and the sidecar only communicate via files.
- Docker dev compose defaults to loopback-only binds so casual demos don't accidentally trigger AGPL Section 13.
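To make the file-only boundary concrete, the sidecar's learning pass could be as simple as the sketch below. The file names and the preferences shape are my own illustration, not part of Kimu; the point is that no process ever links Kimu and the agent SDK.

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Reads the diff log Kimu emitted, tallies which JSON Patch ops the user
// actually performs, and writes a small preferences file the planner can
// consult next session.
function summarizePreferences(diffLogPath: string, outPath: string): void {
  const lines = readFileSync(diffLogPath, "utf8").split("\n").filter(Boolean);
  const opCounts: Record<string, number> = {};
  for (const line of lines) {
    const entry = JSON.parse(line);
    for (const op of entry.patch ?? []) {
      opCounts[op.op] = (opCounts[op.op] ?? 0) + 1;
    }
  }
  writeFileSync(outPath, JSON.stringify({ opCounts }, null, 2));
}
```

A real pass would extract richer signals (which paths get re-edited, typical trim magnitudes), but the I/O boundary stays identical: read one JSONL, write one JSON.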
What I'd like to know before writing PRs
- Is the onSnapshot callback approach acceptable, or would you prefer a different instrumentation point?
- Is a pass-through provenance key on project.json acceptable, or should external metadata live in a sibling file?
- Would you be open to the provider-abstraction PR for backend/main.py even if the agentic-mode work landed separately?
- Any existing branches / discussions I should align with before proposing diffs?
Happy to split this into smaller, focused PRs — diff-log emission first (purely additive), provider abstraction second, schema pass-through third — so each can be reviewed independently.
Thanks for building Kimu!