The "Safety Airbag" for AI Agents. 🛡️
Status: Beta (0.1.x) · Distribution: GitHub Releases (PyPI lagging) · License: Apache 2.0
When your agent breaks, you don't need better prompts — you need a circuit breaker.
FailCore is a fail-fast execution runtime for AI agents.
It does not try to make agents smarter — it makes them safe and reliable.
While frameworks like LangChain focus on planning, FailCore focuses on what happens during execution: enforcing permissions, blocking side-effects (network & filesystem), and generating forensic audit logs.
FailCore is actively developing an experimental proxy mode, distributed via GitHub pre-releases.
The proxy runs in front of LLM providers and transparently forwards requests while observing and tracing execution at runtime. It is streaming-aware and designed as a foundation for future execution-time enforcement and auditing.
Proxy mode is experimental and not production-ready. APIs and behaviors may change.
Cost-related features are under early development, focusing on traceability and provider compatibility. Expect changes as APIs evolve.
FailCore enforces security at tool invocation time —
before any network or filesystem side-effect occurs.
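To illustrate the idea of invocation-time enforcement (this is a minimal sketch, not FailCore's actual API — `SANDBOX_ROOT`, `sandboxed_path`, and `BlockedError` are hypothetical names), a pre-invocation check can be written as a decorator that validates a tool's path argument before the tool body ever touches the filesystem:

```python
import functools
from pathlib import Path

# Hypothetical sandbox root; a real runtime would take this from policy config.
SANDBOX_ROOT = Path("/tmp/agent-workspace").resolve()

class BlockedError(Exception):
    """Raised when a tool call is denied before any side-effect occurs."""

def sandboxed_path(func):
    """Validate the 'path' argument before the wrapped tool runs."""
    @functools.wraps(func)
    def wrapper(path, *args, **kwargs):
        resolved = (SANDBOX_ROOT / path).resolve()
        # Reject anything that escapes the sandbox root (e.g. via '../').
        if not resolved.is_relative_to(SANDBOX_ROOT):
            raise BlockedError(f"path escapes sandbox: {path}")
        return func(resolved, *args, **kwargs)
    return wrapper

@sandboxed_path
def read_file(path):
    return path.read_text()
```

The key property is that the check runs before the side-effect: a blocked call raises rather than partially executing, which is what lets a later audit report mark it as neutralized rather than failed.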
Demo: Tool-level SSRF protection with strict network policy and full execution trace.
FailCore automatically generates audit HTML reports for every run.
(Below: FailCore blocking a real-world path traversal attack generated by an LLM)
This audit report captures a failed execution, providing a structured timeline, incident analysis, and trace-backed evidence for post-incident inspection.
- 🛡️ SSRF Protection — Network-layer validation (DNS resolution and private IP checks).
- 📂 Filesystem Sandbox — Detects and blocks `../` path traversal attacks.
- 📊 Audit Reports — One-command generation of professional HTML dashboards.
- 🎯 Semantic Status — Clear distinction between `BLOCKED` (threat neutralized) and `FAIL` (tool error).
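The SSRF check and the `BLOCKED` vs `FAIL` distinction can be sketched together in a few lines. This is an illustrative stand-in for the network-layer validation described above, not FailCore's implementation; `check_url` is a hypothetical name:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def check_url(url):
    """Resolve the host and reject anything that lands on a private,
    loopback, or link-local address -- the classic SSRF vector against
    cloud metadata services."""
    host = urlparse(url).hostname
    if host is None:
        return "BLOCKED"
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return "FAIL"          # resolution error: a tool error, not a threat
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return "BLOCKED"   # threat neutralized before any request is sent
    return "ALLOWED"
```

Note that resolution failures return `FAIL` while policy violations return `BLOCKED` — the same semantic split the status model above draws between tool errors and neutralized threats.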
```bash
pip install failcore
```

> Note: The PyPI package may lag behind the latest features. For the newest builds (including experimental proxy mode), use GitHub Releases.
```bash
failcore show
failcore report --last > report.html
```

The report provides a human-readable summary of execution results, highlighting blocked operations and failure points.
Modern AI agents are fragile. FailCore addresses core execution risks:
| Risk | Without FailCore | With FailCore |
|---|---|---|
| Security (SSRF) | Agent can access internal metadata services. | BLOCKED by network-layer validation. |
| Filesystem | Agent can read/write arbitrary files via `../`. | BLOCKED by strict sandbox enforcement. |
| Cost | One step fails, entire workflow restarts. | DETERMINISTIC REPLAY of successful steps. |
| Visibility | Thousands of log lines. | FORENSIC REPORT with clear verdicts. |
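The "deterministic replay" row is the cost-control idea: persist each successful step's result so a re-run skips completed work instead of restarting the whole workflow. A minimal sketch of that pattern (illustrative only — `ReplayCache` is a hypothetical name, not a FailCore class):

```python
import json
from pathlib import Path

class ReplayCache:
    """Record each successful step's result on disk; on a later run,
    replay recorded results instead of re-executing the step."""

    def __init__(self, path):
        self.path = Path(path)
        self.cache = json.loads(self.path.read_text()) if self.path.exists() else {}

    def run(self, step_id, func, *args):
        if step_id in self.cache:          # already succeeded: replay the result
            return self.cache[step_id]
        result = func(*args)               # only executed on the first successful run
        self.cache[step_id] = result       # a failed step raises and is never recorded
        self.path.write_text(json.dumps(self.cache))
        return result
```

Because only successful steps are recorded, a crash mid-workflow leaves the cache covering exactly the completed prefix; the next run replays that prefix deterministically and resumes at the failed step.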
Contributions are welcome.
If you are building agent systems that need stronger execution guarantees, we would love your feedback.
Apache License 2.0 — see LICENSE.
Copyright © 2025 ZiLing


