# csp-benchmarks

Performance benchmarks for csp using airspeed velocity (ASV).

This repository contains performance benchmarks for the csp library, designed to:
- Track performance over time across commits
- Detect performance regressions
- Compare different implementations and configurations
- Run on dedicated Hetzner Cloud machines for consistent results

## Benchmark Suites

- `GraphExecutionSuite`: Tests graph execution with varying node counts and tick rates
- `NodeOverheadSuite`: Measures node invocation overhead
- `StatsBenchmarkSuite`: Tests statistical functions (median, quantile, rank)
- `StatsScalingSuite`: Tests how stats scale with data size
- `BaselibSuite`: Tests built-in operations (filter, sample, delay, merge, flatten)
- `CurveSuite`: Tests historical data loading
- `MathSuite`: Tests arithmetic and comparison operations
- `AccumulatorSuite`: Tests accumulating operations (accum, count, diff)
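These suites follow standard ASV conventions: `params`/`param_names` class attributes for parameterization, a `setup` hook that runs outside the timed region, and `time_*` methods that ASV measures. A minimal sketch of that shape — the class name and workload below are illustrative stand-ins, not code from this repository:

```python
# Illustrative ASV-style suite. The workload is a stdlib stand-in for a
# real csp graph; this class is not part of csp-benchmarks itself.

class ExampleScalingSuite:
    # ASV runs each time_* method once per value in params
    params = [100, 1_000, 10_000]
    param_names = ["n_events"]

    def setup(self, n_events):
        # setup() runs before each measurement, outside the timed region
        self.data = list(range(n_events))

    def time_accumulate(self, n_events):
        # Methods prefixed with time_ are wall-clock benchmarks
        total = 0
        for x in self.data:
            total += x
```

Because ASV discovers these classes by naming convention, no registration call is needed — dropping a `bench_*.py` file with such a class into the benchmark directory is enough.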
## Installation

```bash
# Install with development dependencies
pip install -e ".[develop]"

# For Hetzner Cloud integration
pip install -e ".[develop,hetzner]"
```

## Running Benchmarks

After installing csp-benchmarks, you can run benchmarks locally against your installed csp version:
```bash
# List all available benchmark suites
csp-benchmarks list

# Run all benchmarks
csp-benchmarks run

# Run specific suite (core, baselib, math, stats)
csp-benchmarks run --suite core

# Run specific benchmark method
csp-benchmarks run --method linear_graph

# Quick mode (fewer parameter combinations)
csp-benchmarks run --quick

# Verbose output with min/max timing
csp-benchmarks run --suite baselib --verbose

# Custom number of runs per benchmark
csp-benchmarks run --runs 5
```

CLI options:

- `--suite`, `-s`: Filter to a specific suite (e.g., `core`, `baselib`)
- `--method`, `-m`: Filter to a specific method name pattern
- `--quick`, `-q`: Quick mode with fewer parameter combinations
- `--runs`, `-r`: Number of runs per benchmark (default: 3)
- `--verbose`, `-v`: Show detailed timing info (min/max)
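If you drive the CLI from a script (for example, a nightly job), a small helper can assemble an invocation from the flags documented above. `benchmark_cmd` is a hypothetical helper written for this README, not part of the package; it only emits flags listed in the options above:

```python
import subprocess

def benchmark_cmd(suite=None, method=None, quick=False, runs=3, verbose=False):
    """Build a csp-benchmarks command line from the documented flags."""
    cmd = ["csp-benchmarks", "run"]
    if suite:
        cmd += ["--suite", suite]
    if method:
        cmd += ["--method", method]
    if quick:
        cmd.append("--quick")
    if runs != 3:  # 3 is the documented default, so only pass non-defaults
        cmd += ["--runs", str(runs)]
    if verbose:
        cmd.append("--verbose")
    return cmd

# e.g. subprocess.run(benchmark_cmd(suite="baselib", quick=True), check=True)
```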
### Using Make

```bash
# Run quick benchmarks for the current commit
make benchmark-quick

# Run full benchmarks
make benchmark

# Run using local Python environment (no virtualenv)
make benchmark-local

# View results
make benchmark-view
```

### Using ASV Directly

```bash
# Initialize machine configuration
python -m asv machine --yes

# Run benchmarks for current commit
python -m asv run HEAD^!

# Compare with previous commit
python -m asv compare HEAD~1 HEAD

# Generate and serve HTML report
python -m asv publish
python -m asv preview
```

## Hetzner Cloud Integration

For consistent benchmark results, this repository supports running benchmarks on dedicated Hetzner Cloud servers.
### Setup

1. Create a Hetzner Cloud API token at https://console.hetzner.cloud/
2. Set the token as a repository secret: `HCLOUD_TOKEN`
3. Generate an SSH key pair: `ssh-keygen -t ed25519 -f hetzner_key -N ""`
4. Add the public key to the Hetzner Cloud Console under the name `benchmarks`
5. Add the private key content as a repository secret: `HETZNER_SSH_PRIVATE_KEY`
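A CI job would then consume these two secrets roughly as follows. This is a hypothetical workflow excerpt — the step names and key file path are illustrative; only the secret names and the CLI invocation come from this README:

```yaml
# Hypothetical GitHub Actions excerpt; not the repo's actual workflow file.
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run benchmarks on Hetzner
        env:
          HCLOUD_TOKEN: ${{ secrets.HCLOUD_TOKEN }}
        run: |
          # Materialize the SSH private key from the secret (path is illustrative)
          echo "${{ secrets.HETZNER_SSH_PRIVATE_KEY }}" > hetzner_key
          chmod 600 hetzner_key
          python -m csp_benchmarks.hetzner.cli run --ssh-key hetzner_key --ssh-key-name benchmarks --push
```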
### Running on Hetzner

```bash
# Set your Hetzner token
export HCLOUD_TOKEN="your-token-here"

# Run benchmarks on Hetzner (SSH key must already exist in Hetzner as 'benchmarks')
python -m csp_benchmarks.hetzner.cli run --ssh-key ~/.ssh/hetzner_key --ssh-key-name benchmarks --push

# Clean up any leftover servers
python -m csp_benchmarks.hetzner.cli cleanup
```

### Continuous Benchmarking

Benchmarks run automatically:
- On push to main: Benchmarks for the new commit
- Manual trigger: Via workflow_dispatch with custom options
Benchmark results are stored in the `results/` directory and published to GitHub Pages.
View the latest results at: https://csp-community.github.io/csp-benchmarks/
## Adding New Benchmarks

- Add new benchmarks to the `benchmarks/` directory
- Follow ASV naming conventions (`bench_*.py`, class names ending in `Suite`)
- Use parameterized benchmarks for testing across different configurations
- Run `make benchmark-local` to test your benchmarks before submitting
## Contributing Results from Your Machine

You can contribute benchmark results from your own machine to help the community understand csp performance across different hardware configurations.

Add your machine to `csp_benchmarks/asv-machine.json`, using a unique, descriptive name:
```json
{
  "timkpaine-framework-13": {
    "arch": "x86_64",
    "cpu": "AMD Ryzen AI 9 HX 370 (24 cores)",
    "machine": "timkpaine-framework-13",
    "num_cpu": "24",
    "os": "Ubuntu 24.04",
    "ram": "64GB"
  }
}
```

Machine naming convention: `username-device-model` (e.g., `timkpaine-framework-13`, `johndoe-mbp-m3`)
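Before submitting, you can sanity-check your entry with a short script. This is a hypothetical helper, not part of the repo; the required key set below simply mirrors the example entry above:

```python
import json

# Fields present in the example machine entry above
REQUIRED_KEYS = {"arch", "cpu", "machine", "num_cpu", "os", "ram"}

def check_machine_entry(path, machine_name):
    """Verify a machine entry exists and has the fields shown in the example."""
    with open(path) as f:
        machines = json.load(f)
    entry = machines.get(machine_name)
    if entry is None:
        raise KeyError(f"{machine_name} not found in {path}")
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        raise ValueError(f"{machine_name} is missing fields: {sorted(missing)}")
    if entry.get("machine") != machine_name:
        raise ValueError('"machine" field should match the entry name')
    return entry
```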
```bash
# Install dependencies
pip install -e ".[develop]"

# Copy machine config to ASV location
cp csp_benchmarks/asv-machine.json ~/.asv-machine.json

# Run benchmarks with your machine name
python -m asv run --config csp_benchmarks/asv.conf.json --machine your-machine-name

# Or use make (runs with local Python)
make benchmark-local

# Check that results were created
ls csp_benchmarks/results/your-machine-name/

# Preview the results locally
make benchmark-view
```

### Submitting Results

- Fork the repository
- Create a branch: `git checkout -b add-machine-results`
- Commit your changes:
  - `csp_benchmarks/asv-machine.json` (your machine entry)
  - `csp_benchmarks/results/your-machine-name/` (your result files)
- Open a PR with a description of your hardware
### Tips for Reliable Results

- Close other applications during benchmarking
- Run on AC power (not battery) for laptops
- Ensure stable CPU frequency (disable turbo boost for more consistent results)
- Run multiple times and verify results are stable
- Include your OS version and Python version in the PR description
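One concrete way to check that repeated runs are stable is the coefficient of variation (stddev relative to mean) across run timings. The helper and the 5% threshold below are illustrative examples, not a project standard:

```python
import statistics

def coefficient_of_variation(timings):
    """Run-to-run spread relative to the mean; lower means more stable."""
    return statistics.stdev(timings) / statistics.mean(timings)

def is_stable(timings, max_cv=0.05):
    """Treat a benchmark as stable if spread is under max_cv (5% by default)."""
    return coefficient_of_variation(timings) <= max_cv
```

For example, timings of `[1.00, 1.01, 0.99]` seconds pass this check, while `[1.0, 2.0, 1.5]` clearly do not — the latter suggests background load or frequency scaling interfering with the run.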