Quick Start Guide
This page gets you from zero to your first generated diagram in under 5 minutes.
```bash
pip install paperbanana
```

Or from source:
```bash
git clone https://github.com/llmsresearch/paperbanana.git
cd paperbanana
pip install -e ".[google]"
```

See Installation for more details.
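Before running the CLI, it can help to confirm the package is importable in the current environment. This is a minimal sketch, assuming the install exposes a top-level `paperbanana` module (not an official check from these docs):

```python
import importlib.util

def is_installed(package):
    """Return True if `package` can be found by the current interpreter."""
    return importlib.util.find_spec(package) is not None

print(is_installed("paperbanana"))
```

If this prints `False`, make sure you installed into the same environment (virtualenv, conda, etc.) that your `python` resolves to.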
PaperBanana ships with a sample input. Run this to verify everything works:
```bash
paperbanana generate \
  --input examples/sample_inputs/transformer_method.txt \
  --caption "Overview of our encoder-decoder architecture with sparse routing"
```

Output is saved to `outputs/run_<timestamp>/final_output.png`. The folder also contains intermediate iterations and metadata.
Create a text file with your methodology section:
```bash
cat > my_method.txt << 'EOF'
Our framework consists of an encoder that processes input sequences
through multi-head self-attention layers, followed by a decoder that
generates output tokens auto-regressively using cross-attention to
the encoder representations. We add a novel routing mechanism that
selects relevant encoder states for each decoder step.
EOF
```

Then generate:
```bash
paperbanana generate \
  --input my_method.txt \
  --caption "Overview of our encoder-decoder framework"
```

- **Be specific in your methodology text.** The more detail about components, connections, and data flow, the better the diagram. Vague descriptions like "we process the input" give the pipeline little to work with.
- **Write a descriptive caption.** The caption tells the pipeline what the diagram should communicate. "System architecture showing the three-stage pipeline with feedback loop" is better than "Overview of our method."
- **Check intermediate iterations.** Sometimes iteration 2 looks better than the final iteration 3. All versions are saved in the output directory.
- **Re-run if needed.** Generation is non-deterministic: a second run on the same input can produce meaningfully different, and sometimes better, results.
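To compare the saved versions side by side, you can list every image in a run folder in order. A sketch only: the docs do not specify the iteration filenames, so this simply collects all PNGs from the most recent `outputs/run_*` folder:

```python
from pathlib import Path

def saved_images(run_dir):
    """Return all PNG files in a run folder, sorted by name."""
    return sorted(Path(run_dir).glob("*.png"))

# Show every saved version from the most recent run, if any runs exist.
runs = sorted(Path("outputs").glob("run_*"))
if runs:
    for image in saved_images(runs[-1]):
        print(image.name)
```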
If you have data in CSV or JSON format:
```bash
paperbanana plot \
  --data results.csv \
  --intent "Bar chart comparing model accuracy across benchmarks"
```

Next steps:

- CLI Reference for all available flags and commands
- Python API for programmatic usage
- MCP Server Setup for IDE integration
- Configuration for customizing pipeline behavior
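The `paperbanana plot` command above expects tabular data such as a CSV file. As an illustration only, here is one way to write a small `results.csv` with Python's standard library; the column names (`model`, `benchmark`, `accuracy`) are assumptions for the example, not a schema required by the tool:

```python
import csv

# Hypothetical benchmark results; replace with your own data.
rows = [
    {"model": "baseline", "benchmark": "MMLU", "accuracy": 0.61},
    {"model": "ours", "benchmark": "MMLU", "accuracy": 0.68},
]

with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["model", "benchmark", "accuracy"])
    writer.writeheader()
    writer.writerows(rows)
```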