LoadForge is a DSL and runtime for describing and executing API tests under load.
The goal is to avoid per-test scripting and instead declare everything in a .lf file:
- environment and target
- authentication
- scenarios and response validations
- load profile
- SLO/metrics checks
In classic load testing setups, logic is spread across scripts, helpers, and config files. That usually leads to:
- slower test authoring and updates
- lower readability
- harder reviews and reuse
- weak traceability between "what we test" and "how runtime executes it"
LoadForge introduces a single DSL for test intent, while the runtime handles execution, measurement, and reporting.
- DSL (.lf) files are parsed with a textX grammar (src/loadforge/grammar/loadforge.tx); see the sketch after this list.
- Parse output is mapped to typed Python models (src/loadforge/model).
- Preprocessors normalize values (for example, HTTP methods and JSON check kinds to enum values).
- Runtime prepares context (env + vars), optionally runs auth login, then executes scenarios.
- Every request is measured and recorded (latency, status, success/error).
- Runtime computes aggregates (p50/p95/p99, errorRate, rps) and evaluates metric thresholds.
- If stopped early, runtime still prints partial metrics collected so far.
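For illustration, a minimal sketch of the parse-and-normalize step with textX (the Request rule name, the normalizer, and the example file path are assumptions; the grammar path is the one noted above):

from textx import metamodel_from_file

# Build the metamodel from the grammar file shipped with the package.
lf_mm = metamodel_from_file("src/loadforge/grammar/loadforge.tx")

# Object processors are textX's hook for the normalization step,
# e.g. uppercasing HTTP methods before they are mapped to enums.
def normalize_request(request):
    request.method = request.method.upper()

lf_mm.register_obj_processors({"Request": normalize_request})

# Parse a .lf file into a model object graph, which LoadForge then
# maps onto the typed dataclasses in src/loadforge/model.
model = lf_mm.model_from_file("examples/demo.lf")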
Core blocks:
test, environment, target, auth login, variables, scenario (with request, expect status, expect json), load, metrics
auth login supports two execution modes:
- Shared token auth (no file flag): one login request is executed, and the same Bearer token is reused by all virtual users.
- Per-user auth (file flag present): each virtual user authenticates individually using credentials from an external .ulf (User List File) provided via the CLI.
The file keyword in the auth block is a boolean flag — it signals that a .ulf file is required. The actual path to the .ulf file is provided as a CLI argument (the same way .env is provided):
loadforge test.lf --env .env --userlist users.ulf
.ulf (User List File) format uses a simple username : password syntax, one entry per line, parsed by textX:
[email protected] : secret123
[email protected] : hunter2
[email protected] : pa$$w0rd
.ulf mode details:
- The file flag in auth login {} declares that the test requires a .ulf file.
- The .ulf file path is provided via --userlist; .env is provided via --env.
- Use --info to print JSON metadata for a .lf file, including env, userlist, and name.
- Auth body fields use ${username} and ${password} interpolation from .ulf entries.
- If load.users is greater than the number of .ulf entries, users are assigned in round-robin order (see the sketch below).
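A minimal sketch of that round-robin assignment (names are illustrative, not the runtime's actual code):

def credentials_for_user(user_index, ulf_entries):
    # Virtual user N reuses entry N modulo the number of .ulf entries,
    # so e.g. 50 users can share the 3 entries above.
    return ulf_entries[user_index % len(ulf_entries)]

# With the 3 entries above: user 0 -> alice, user 1 -> bob,
# user 2 -> charlie, user 3 -> alice again, and so on.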
environment reads system variables via env("KEY"):
environment {
baseUrl = env("BASE_URL")
token = env("TOKEN")
}
variables defines local DSL variables:
variables {
q = "phone"
pageSize = "20"
}
References use #name and can point to env values or variables:
target #baseUrl
variables {
authHeader = #token
}
String interpolation is supported with ${name}:
request GET "/catalog/search?q=${q}&limit=${pageSize}"
Context resolution order:
- load environment first
- then resolve variables (variables can reference env values)
- runtime builds a unified context; duplicate names are not allowed (see the sketch below)
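A sketch of that resolution order, assuming environment entries map names to system variable keys and variables may hold #name references (illustrative, not the runtime's actual code):

import os

def build_context(environment_block, variables_block):
    # 1) environment entries read system variables via env("KEY").
    context = {name: os.environ[key] for name, key in environment_block.items()}
    # 2) variables are resolved next and may reference env values via #name;
    #    duplicate names are rejected.
    for name, value in variables_block.items():
        if name in context:
            raise ValueError(f"Duplicate name in context: {name}")
        context[name] = context[value[1:]] if value.startswith("#") else value
    return context

# build_context({"baseUrl": "BASE_URL", "token": "TOKEN"},
#               {"authHeader": "#token", "q": "phone"})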
test "Functional Test Demo" {
environment {
baseUrl = env("BASE_URL")
}
target #baseUrl
scenario "fetch index" {
request GET "/"
expect status 200
}
}
test "Catalog search - steady load" {
environment {
baseUrl = env("BASE_URL")
}
target #baseUrl
scenario "search" {
request GET "/catalog/search?q=phone"
expect status 200
expect json $.results isArray
}
load {
users 50
rampUp 30s
duration 5m
}
metrics {
p95 < 250ms
errorRate < 1%
}
}
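The metrics block above declares thresholds that the runtime evaluates against the computed aggregates after the run. A sketch of such a check (illustrative; the DSL uses strict < comparisons, so a value equal to the limit fails):

def check_thresholds(aggregates, thresholds):
    # aggregates: e.g. {"p95": 231.0, "errorRate": 0.4}  (ms and percent)
    # thresholds: e.g. {"p95": 250.0, "errorRate": 1.0}
    failures = {
        name: (aggregates[name], limit)
        for name, limit in thresholds.items()
        if aggregates[name] >= limit
    }
    return len(failures) == 0, failures

ok, failed = check_thresholds({"p95": 231.0, "errorRate": 0.4},
                              {"p95": 250.0, "errorRate": 1.0})
# ok -> True, failed -> {}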
expect json uses JSONPath and supports:
isArray, notEmpty, isEmpty, equals <value|#ref>, hasSize <number>, isNull, notNull, isObject, isString, isNumber, isBool, contains <value|#ref>, matches <regex|#ref>
Example:
scenario "json checks" {
request GET "/users/1"
expect status 200
expect json $.id notNull
expect json $.name isString
expect json $.tags isArray
expect json $.tags notEmpty
expect json $.email matches "[^@]+@[^@]+"
}
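Under the hood, checks like these can be evaluated with a JSONPath library; a sketch using the jsonpath-ng package (the package choice and function name are assumptions, and only a few check kinds are shown):

from jsonpath_ng import parse

def check_json(payload, path, kind):
    # Evaluate the JSONPath and apply the requested check to the first match.
    values = [m.value for m in parse(path).find(payload)]
    value = values[0] if values else None
    if kind == "isArray":
        return isinstance(value, list)
    if kind == "notEmpty":
        return bool(value)
    if kind == "notNull":
        return value is not None
    if kind == "isString":
        return isinstance(value, str)
    raise ValueError(f"Unsupported check: {kind}")

check_json({"tags": ["a", "b"]}, "$.tags", "isArray")  # True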
Parser returns a TestFile model with nested dataclass objects.
Conceptually:
TestFile
-> Test(name, environment, target, auth, variables, scenarios, load, metrics)
-> Scenario(name, steps=[Request | ExpectStatus | ExpectJson, ...])
-> Load(users, ramp_up, duration)
-> MetricsBlock(checks=[MetricExpectation, ...])
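A condensed sketch of what those typed models might look like as dataclasses (field names follow the tree above; the real definitions live in src/loadforge/model and may differ):

from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class Request:
    method: str
    path: str

@dataclass
class ExpectStatus:
    status: int

@dataclass
class ExpectJson:
    path: str
    kind: str
    value: Optional[str] = None

@dataclass
class Scenario:
    name: str
    steps: List[Union[Request, ExpectStatus, ExpectJson]] = field(default_factory=list)

@dataclass
class Load:
    users: int
    ramp_up: str
    duration: str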
Runtime flow:
- parse_file loads the DSL and creates the model.
- run_test resolves env/vars/target.
- If auth login exists: shared mode authenticates once before the load and reuses one Bearer token; .ulf mode authenticates each virtual user with credentials from an external .ulf file.
- run_load_test_async starts virtual users with ramp-up behavior.
- Every request is sent via httpx and asyncio, latency is measured, and data is recorded into MetricsCollector (see the sketch after this list).
- expect steps validate the last response; failures mark the last request as failed.
- Runtime builds the final report (LoadTestResult) with throughput, latency, errors, and per-scenario stats.
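A simplified sketch of the per-request measurement step (the collector and its record signature are illustrative; the real MetricsCollector lives in the runtime):

import time
import httpx

async def timed_request(client: httpx.AsyncClient, method: str, url: str, collector):
    start = time.perf_counter()
    try:
        response = await client.request(method, url)
        latency_ms = (time.perf_counter() - start) * 1000
        collector.record(latency_ms=latency_ms,
                         status=response.status_code,
                         success=response.status_code < 400)
        return response
    except httpx.HTTPError as exc:
        latency_ms = (time.perf_counter() - start) * 1000
        collector.record(latency_ms=latency_ms, status=None, success=False, error=str(exc))
        return None

# Usage inside the runtime's event loop (illustrative):
# async with httpx.AsyncClient(base_url=base_url) as client:
#     await timed_request(client, "GET", "/", collector)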
In load mode, CLI renders a single updating line with elapsed time, active users, request count, throughput, and errors:
⠹ 6.0s / 30s │ Users: 10/10 │ Reqs: 666 │ Req/s: 107.3 │ Errors: 0
This line refreshes in place until the test ends or is stopped.
After execution, CLI prints a report like:
LoadForge Load Test Report
Test: Load + Metrics Demo
Duration: 10.0s | Users: 10 | Ramp-up: 3s
Throughput:
Total requests: 10,545
Requests/sec: 1053.7
Latency (ms):
Min: 0.8 Avg: 8.0 p50: 3.9
p95: 12.9 p99: 18.0 Max: 2959.9
Errors:
Error rate: 0.0% (0/10,545)
Per-scenario breakdown:
fetch index reqs: 10,545 rps: 1053.7 p95: 12.9ms err: 0.0%
Metric thresholds: PASS
Result: PASS
If stopped early, the report is marked "Stopped early" and still shows the partial metrics collected so far.
- For setup, commands, testing, and debugging see development.md.
- DSL examples are in examples/ (see examples/README.md).
- Use examples/.env.example as a template for examples/.env.
There is a VS Code extension for this project that provides:
- syntax highlighting for .lf files
- bundled executable/runtime integration
- running tests directly from VS Code
Extension link: link