nmyinger/latticework

Latticework

A living map of mental models, decisions, and compounding

Latticework is a personal knowledge system that turns raw thoughts into a clean, navigable graph of mental models and decision trees. It is designed around progressive formalization: capture quickly, let the system suggest structure, then curate a canonical graph that can be published and explored.

If you want the long-form vision and language, start with introduction.md. For deeper technical detail, see technical_overview.md. A plain-language glossary lives in concepts.md.


What lives in this repo (tour)

Top-level:

  • mind/: canonical knowledge graph + capture stream.
  • tools/: pipeline scripts (suggest, apply, build, cleanup, wipe).
  • build/: generated artifacts (graph JSON + HTML).
  • app.py: Streamlit UI for ingesting text and rendering a graph.
  • latticework_pipeline.ipynb: end-to-end notebook pipeline.
  • lib/: vendored JS/CSS assets used by HTML graph rendering.
  • archive/: older experiments (kept out of the main pipeline).
  • app/: reserved for a future web app frontend.
  • requirements.txt: Python dependencies.
  • introduction.md, technical_overview.md, concepts.md: product and architecture docs.
  • .env: local secrets (never commit; use .gitignore).

mind/ layout (source of truth):

  • mind/nodes/*.md: canonical nodes (one file per model).
  • mind/edges/edges.jsonl: canonical edges (append-friendly JSONL).
  • mind/trees/*.yml: decision trees (YAML).
  • mind/inbox/thoughts.jsonl: raw capture stream.
  • mind/inbox/suggestions.jsonl: system proposals.
  • mind/inbox/drafts/: auto-generated drafts waiting for promotion.
  • mind/cleanup/cleanup_log.jsonl: auto-merge log with undo support.
  • mind/cleanup/wipe_log.jsonl: destructive wipe log + backups.

build/ outputs:

  • build/graph.json: full graph (private + public).
  • build/graph.public.json: visibility-filtered graph.
  • build/graph.html: rendered HTML graph (via app.py).

Core architecture (in one screen)

  1. Capture raw thoughts quickly into mind/inbox/thoughts.jsonl.
  2. Generate suggestions (tools/suggest.py):
    • attach example to existing node
    • create a new node draft
    • propose new edges
  3. Approve and apply suggestions (tools/apply_suggestions.py):
    • drafts become canonical nodes
    • edges are appended to edges.jsonl
  4. Auto-clean the graph (tools/graph_cleanup.py) to merge overlaps.
  5. Build graph artifacts (tools/build_graph.py) for UI and publishing.
  6. Explore the graph in a UI (app.py) or render artifacts for a web app.

This keeps capture fast and the canonical graph clean.


Data model (canonical formats)

Nodes are Markdown with YAML frontmatter, stored in mind/nodes/.

Example:

---
id: mm_000001
type: mental_model
title: Geographical Orientation
summary: Understanding the Earth's poles as points of reference for navigation.
mechanism: The Earth is divided into hemispheres; poles are anchor points.
predictions:
  - People orient using poles.
signals:
  - Compasses point north.
actions:
  - Teach pole-based navigation.
failure_modes:
  - Oversimplifies cultural geography.
tags: [geography, navigation]
visibility: private
created_at: 2025-12-20T05:20:59Z
updated_at: 2025-12-20T05:20:59Z
---

## Notes
- Concrete examples or refinements here.
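Reading a node file back is a matter of splitting the YAML frontmatter from the Markdown body. A minimal sketch of that split (the function name is illustrative, not part of the codebase; it assumes the file starts with a `---` line, as in the example above):

```python
def split_node_file(text: str) -> tuple[str, str]:
    """Split a node file into (frontmatter, body).

    Assumes the file opens with a '---' line and the frontmatter
    is closed by a second '---' line, as in the example above.
    """
    _, frontmatter, body = text.split("---\n", 2)
    return frontmatter.strip(), body.strip()
```

The returned frontmatter string can then be handed to a YAML parser such as `yaml.safe_load` if PyYAML is available.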

Edges are JSONL (one edge per line), stored in mind/edges/edges.jsonl:

{"id":"e_00000001","from":"mm_000001","to":"mm_000002","rel":"supports","weight":0.7,"rationale":"Orientation affects regional perceptions.","visibility":"private","created_at":"2025-12-20T05:21:47Z","created_from":"suggestion"}
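The append-friendly JSONL format means adding an edge is one `json.dumps` plus a newline, and reading the file back is one `json.loads` per line. A sketch of that round trip (function names are illustrative, not the repo's actual helpers):

```python
import json


def append_edge(path: str, edge: dict) -> None:
    """Append one edge as a single JSON line (the JSONL convention)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(edge) + "\n")


def load_edges(path: str) -> list[dict]:
    """Read every non-blank line back as an edge dict."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```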

Capture stream (raw thoughts), mind/inbox/thoughts.jsonl:

{"id":"t_2025-12-20_052059","ts":"2025-12-20T05:20:59Z","text":"Africa is mostly south of the equator","context":{},"visibility":"private"}

Suggestions (system proposals), mind/inbox/suggestions.jsonl:

{"id":"s_2025-12-20_052147","ts":"2025-12-20T05:21:47Z","thought_id":"t_...","proposals":[{"kind":"new_node_draft","draft_path":"..."}],"status":"pending"}

Decision trees (YAML), mind/trees/*.yml:

id: tree_bet_selection_v1
title: Bet selection for power law payouts
start: q1
nodes:
  q1:
    type: question
    text: "Is downside capped?"
    yes: q2
    no: stop_pass
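Evaluating a tree like this means following yes/no answers from the start node. A minimal sketch under the assumption that question nodes carry `yes`/`no` keys as shown (the `walk` function is illustrative, not part of the pipeline):

```python
def walk(tree: dict, answers: list[str]) -> str:
    """Follow yes/no answers from tree['start']; returns the id of the
    node reached when the answers run out or a non-question is hit."""
    node_id = tree["start"]
    for answer in answers:
        node = tree["nodes"].get(node_id)
        if node is None or node.get("type") != "question":
            break
        node_id = node[answer]
    return node_id
```

One caution: in YAML 1.1 (as implemented by PyYAML), bare `yes`/`no` keys load as booleans, so quoting them (`"yes": q2`) avoids surprises when parsing tree files.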

Quickstart (CLI)

Setup:

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Set your API key (do not commit .env):

export OPENAI_API_KEY="..."
# or: export LATTICEWORK_API_KEY="..."

Run the pipeline on a thought:

python tools/suggest.py --text "Africa is mostly south of the equator"
python tools/apply_suggestions.py --approve-latest
python tools/graph_cleanup.py
python tools/build_graph.py

Launch the UI:

streamlit run app.py

If you want a pure heuristic run (no LLM):

python tools/suggest.py --text "..." --no-llm

Streamlit UI (app.py)

app.py is the current UI entry point. It does the following:

  • runs cleanup before processing new input
  • generates suggestions from input text (LLM or fallback)
  • optionally auto-approves and applies suggestions
  • rebuilds build/graph.json
  • renders an HTML graph using PyVis
  • shows cleanup log entries and provides undo commands

Launch with:

streamlit run app.py

Pipeline tools (what each script does)

tools/suggest.py

  • input: text (CLI, file, or stdin)
  • output: appends to mind/inbox/thoughts.jsonl and suggestions.jsonl
  • optional LLM use; configurable via environment variables

tools/apply_suggestions.py

  • input: mind/inbox/suggestions.jsonl
  • output: promotes drafts to mind/nodes/*.md, appends edges
  • --approve-latest for quick approval

tools/graph_cleanup.py

  • auto-merges overlapping nodes and logs actions
  • undo support with --undo <merge_id>
  • safe by default; --force is needed only when content drift is detected

tools/build_graph.py

  • compiles canonical files into build/graph.json and build/graph.public.json
  • warns if edges point to missing nodes

tools/wipe_graph.py

  • destructive reset for the graph and inbox
  • backs up by default, requires --yes

Graph cleanup and merge logic

Cleanup is rule-based and extensible. Current rule: overlap_merge in tools/cleanup_rules/overlap_merge.py.

How merges work:

  • candidates are detected by title/summary/tag similarity
  • the richer node is kept as primary
  • if the secondary is clearly newer and contains new info, it can be treated as an update
  • fields are merged with dedupe and best-text heuristics
  • public visibility wins over private
  • notes are preserved under "## Merged Notes"
  • edges are rewired to the primary node

Run cleanup:

python tools/graph_cleanup.py

Dry run and undo:

python tools/graph_cleanup.py --dry-run
python tools/graph_cleanup.py --undo <merge_id>

To add new cleanup rules:

  1. Create tools/cleanup_rules/<rule>.py with a find_candidates function.
  2. Register it in tools/cleanup_rules/__init__.py.
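As an example, a hypothetical tag-overlap rule might look like this (the node shape and the two-tag threshold are assumptions for illustration; the repo's actual candidate interface may differ):

```python
def find_candidates(nodes: list[dict]) -> list[tuple[str, str]]:
    """Pair up nodes whose tag sets share at least two tags.

    Hypothetical rule: assumes each node dict carries 'id' and
    'tags' keys.
    """
    candidates = []
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            shared = set(a.get("tags", [])) & set(b.get("tags", []))
            if len(shared) >= 2:
                candidates.append((a["id"], b["id"]))
    return candidates
```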

Wiping the graph (destructive)

Use this when you want to reset the graph, inbox, and trees. Backups are stored in mind/cleanup/.

python tools/wipe_graph.py --yes

Dry run or hard delete:

python tools/wipe_graph.py --dry-run
python tools/wipe_graph.py --yes --hard-delete

Environment variables

Sensible defaults are set in code; these are the main overrides:

  • OPENAI_API_KEY or LATTICEWORK_API_KEY
  • OPENAI_MODEL or LATTICEWORK_MODEL
  • OPENAI_BASE_URL or LATTICEWORK_BASE_URL
  • LATTICEWORK_PROVIDER (defaults to openai)
  • LATTICEWORK_TEMPERATURE (float)

.env is local-only. Do not commit secrets.


Build artifacts and visibility

tools/build_graph.py creates:

  • build/graph.json (full graph, private + public)
  • build/graph.public.json (only visibility: public)

Use the public graph for sharing; the full graph is for personal use.
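Visibility filtering amounts to keeping public nodes, then keeping only the public edges whose endpoints both survive. A minimal sketch of the idea, assuming graph.json follows the node and edge shapes shown earlier (the function name is illustrative):

```python
def public_view(graph: dict) -> dict:
    """Keep nodes marked public, then keep public edges whose
    endpoints both survive the node filter."""
    keep = {n["id"] for n in graph["nodes"] if n.get("visibility") == "public"}
    return {
        "nodes": [n for n in graph["nodes"] if n["id"] in keep],
        "edges": [
            e for e in graph["edges"]
            if e.get("visibility") == "public"
            and e["from"] in keep and e["to"] in keep
        ],
    }
```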


Notebook workflow

latticework_pipeline.ipynb provides an end-to-end, notebook-friendly flow:

  1. cleanup the graph
  2. capture input
  3. generate suggestions
  4. apply suggestions
  5. build and render the graph

Use it when you want a visual, step-by-step process.


Where to read next

  • introduction.md: product vision and philosophy
  • technical_overview.md: full architecture and data formats
  • concepts.md: glossary for non-technical readers

About

This is a personal project where I hope to better understand how mental models work and how to use decision trees to turn my mental models into actionable worldly wisdom.
