Standalone Python brandpipe pipeline with:
- candidate generation
- LLM ideation (OpenRouter, OpenAI-compatible local runtimes, hybrid)
- shortlist validation with browser-backed web/TMView/App Store rechecks
- exclusion memory (SQLite) to avoid re-validating eliminated names
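The exclusion memory can be sketched as a single SQLite table keyed by normalized name; the schema and function names below are assumptions for illustration, not the actual brandpipe implementation:

```python
import sqlite3

# Hypothetical schema; the real brandpipe table layout may differ.
SCHEMA = """
CREATE TABLE IF NOT EXISTS excluded_names (
    name TEXT PRIMARY KEY,
    reason TEXT NOT NULL,
    excluded_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
)
"""

def open_memory(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(SCHEMA)
    return conn

def exclude(conn: sqlite3.Connection, name: str, reason: str) -> None:
    # INSERT OR IGNORE keeps the first recorded elimination reason.
    conn.execute(
        "INSERT OR IGNORE INTO excluded_names (name, reason) VALUES (?, ?)",
        (name.lower(), reason),
    )
    conn.commit()

def is_excluded(conn: sqlite3.Connection, name: str) -> bool:
    row = conn.execute(
        "SELECT 1 FROM excluded_names WHERE name = ?", (name.lower(),)
    ).fetchone()
    return row is not None
```

With a check like `is_excluded` before validation, previously eliminated candidates never reach the browser-backed probes again.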
This repository expects secrets to be loaded via `.envrc`:
- `OPENROUTER_API_KEY`
- `OPENROUTER_HTTP_REFERER`
- `OPENROUTER_X_TITLE`
Load and use env like this:
```
direnv allow .
direnv exec . env | rg OPENROUTER
```

Important: run commands that need remote access via `direnv exec . <command>`.
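A quick way to fail fast when a command was not launched through `direnv exec .` is to check the variables listed above up front; this is a minimal sketch, not part of brandpipe itself:

```python
import os
import sys

# Variable names taken from the .envrc list above.
REQUIRED = ("OPENROUTER_API_KEY", "OPENROUTER_HTTP_REFERER", "OPENROUTER_X_TITLE")

def missing_env(env=os.environ):
    # Empty values count as missing, so a blank export is caught too.
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    gaps = missing_env()
    if gaps:
        sys.exit(f"missing env vars (run via `direnv exec .`): {', '.join(gaps)}")
```

Running this at the top of a remote-access script turns a confusing mid-run auth failure into an immediate, explicit error.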
The recurring branding automations currently run from dedicated Codex worktrees. Do not remove these paths during routine worktree cleanup:
- `~/.codex/worktrees/automation-branding-fusion/brandname-generator`
- `~/.codex/worktrees/automation-branding-health/brandname-generator`
Current automation mapping:
- `branding-fusion-run` (generation lane): `automation-branding-fusion`
- `branding-fusion-run-2` (fusion lane): `automation-branding-fusion`
- `creative-run-check` (validation lane): `automation-branding-health`
If you need to reclaim them, pause or reconfigure the automations first.
Use Python 3.11+ and a local virtual environment:
```
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
python -m pip install -r requirements-dev.txt
python -m playwright install chromium
```

Notes:
- `requirements.txt` is intentionally small and only covers optional capabilities currently used by `brandpipe`:
  - `playwright` for EUIPO/Swissreg browser probes
  - `wordfreq` for corpus analysis utilities and future frequency-based tuning
- Core candidate generation/validation scripts are standard-library based.
- single-config runs:
  `direnv exec . env PYTHONPATH=src python3 -m brandpipe.cli run --config <toml>`
- shortlist validation:
  `direnv exec . env PYTHONPATH=src python3 -m brandpipe.cli validate --input-csv <csv> --out-dir <label_root>`
```
direnv exec . env PYTHONPATH=src python3 -m brandpipe.cli run \
  --config resources/brandpipe/fixture_basic_run.toml
```

Assumes LM Studio local server is running at http://127.0.0.1:1234/v1.

```
direnv exec . env PYTHONPATH=src python3 -m brandpipe.cli run \
  --config resources/brandpipe/lmstudio_runic_forge_smoke.toml
```

Optional provider warm-cache probe:

```
python3 scripts/brandpipe/local_llm_warm_cache_probe.py \
  --provider=openai_compat \
  --base-url=http://127.0.0.1:1234/v1 \
  --model=llama-3.3-8b-instruct-omniwriter \
  --ttl-s=3600 \
  --keep-alive=30m \
  --runs=5 \
  --gap-s=1
```

```
direnv exec . env PYTHONPATH=src python3 -m brandpipe.cli validate \
  --input-csv <review_csv> \
  --mode keep_maybe \
  --out-dir test_outputs/brandpipe/validate/manual \
  --web-browser-profile-dir test_outputs/brandpipe/validate/playwright-profile \
  --tmview-profile-dir test_outputs/brandpipe/validate/playwright-profile
```

Canonical prompt location:
- Family prompts used by the supported generation flow: `resources/brandpipe/prompts/*.txt`
How to wire a prompt into generation:
- Single-config brandpipe runs: set `[ideation].prompt_template_file` in the TOML config you run.
- Family-surface generation uses the built-in templates under `resources/brandpipe/prompts/`.
Recommendation:
- Keep one prompt file per active family.
- Do not keep versioned prompt forks in the main tree unless they are the current supported prompt.
- Keep run outputs isolated per variant (`--out-dir`) so comparisons stay clean.
Brandpipe output contract:
- `test_outputs/brandpipe/run/<config_slug>/<invocation_id>/`: direct single-config pipeline runs
- `test_outputs/brandpipe/validate/<label>/<invocation_id>/`: validator bundles with the same contract
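The contract can be expressed as a small path helper; the directory layout is the documented contract, but the helper functions themselves (and the sample invocation id) are illustrative:

```python
from pathlib import Path

# Hypothetical helpers mirroring the documented output contract.
def run_dir(config_slug: str, invocation_id: str,
            root: str = "test_outputs/brandpipe") -> Path:
    # test_outputs/brandpipe/run/<config_slug>/<invocation_id>/
    return Path(root) / "run" / config_slug / invocation_id

def validate_dir(label: str, invocation_id: str,
                 root: str = "test_outputs/brandpipe") -> Path:
    # test_outputs/brandpipe/validate/<label>/<invocation_id>/
    return Path(root) / "validate" / label / invocation_id
```

Keeping both buckets behind one helper makes it harder to mix run and validate outputs in the same directory.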
For a new market or brand line, keep the flow simple and stay inside the brandpipe surfaces:
- create a dedicated run config under `resources/brandpipe/`
- if ideation mix changes, copy an existing TOML under `resources/brandpipe/` and adjust `[ideation]` there
- keep outputs isolated under the right bucket: `test_outputs/brandpipe/run/<brand>_<market>/<invocation_id>/` or `test_outputs/brandpipe/validate/<brand>_<market>/<invocation_id>/`
- validate the reviewed shortlist with `brandpipe.cli validate`
- keep the flow inside the brandpipe CLI surfaces
Suggested pattern:
- `resources/brandpipe/<brand>_<market>.toml`
- optional custom prompt file referenced by `[ideation].prompt_template_file`
- `test_outputs/brandpipe/run/<brand>_<market>/...`
- `test_outputs/brandpipe/validate/<brand>_<market>/...`
- Detailed supported runbook: `docs/brandpipe/run_guide.md`
- Validation workflow explanation: `docs/brandpipe/validation_workflow.md`
- Single-config CLI help: `direnv exec . env PYTHONPATH=src python3 -m brandpipe.cli --help`
- Validation help: `direnv exec . env PYTHONPATH=src python3 -m brandpipe.cli validate --help`
- Brandpipe docs index: `docs/brandpipe/README.md`
- Active configs, prompts, and fixtures: `resources/brandpipe/`
- Active helper scripts: `scripts/brandpipe/`
- Historical legacy artifacts: `artifacts/branding/legacy/`