Know if a skill is safe before you use it.
Most AI skills are:
- opaque
- unverified
- unclear about risks
Agents are expected to install and run them anyway.
This creates:
- hidden security risks
- unknown external dependencies
- blind trust in third-party logic
Skill Vetter v2 evaluates a skill before you trust it.
It provides:
- structured risk classification
- capability analysis
- trust dependency evaluation
- clear safety verdicts
Every skill is evaluated across three dimensions; the sketch after this list models them as plain records:
- Capabilities
  - file writes
  - package installs
  - system changes
  - external API calls
- Data handling
  - data access
  - credential exposure
- Trust dependencies
  - reliance on external services
  - transparency of those services
  - ability to verify outputs
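For concreteness, the findings along these dimensions can be captured as plain records. The sketch below is illustrative Python; the class and field names are assumptions, not Skill Vetter's actual interface.

```python
# Minimal sketch of the three-dimension evaluation model.
# All names are illustrative assumptions, not Skill Vetter's real API.
from dataclasses import dataclass, field


@dataclass
class Capabilities:
    """What the skill can do to the host system."""
    file_writes: list[str] = field(default_factory=list)       # paths written
    package_installs: list[str] = field(default_factory=list)  # packages pulled in
    system_changes: list[str] = field(default_factory=list)    # env vars, services, etc.
    external_api_calls: list[str] = field(default_factory=list)  # outbound endpoints


@dataclass
class DataHandling:
    """What data the skill touches and whether secrets can leak."""
    data_accessed: list[str] = field(default_factory=list)
    credential_exposure: bool = False


@dataclass
class TrustDependencies:
    """How much the skill leans on services you cannot inspect."""
    external_services: list[str] = field(default_factory=list)
    services_transparent: bool = True   # are those services documented and auditable?
    outputs_verifiable: bool = True     # can results be checked independently?


@dataclass
class SkillEvaluation:
    skill_name: str
    capabilities: Capabilities
    data: DataHandling
    trust: TrustDependencies
```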
Every evaluation ends in one of three verdicts; a mapping sketch follows the list:
- safe – low risk
- caution – review before use
- unsafe – avoid
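How findings collapse into a single verdict is a policy choice. The sketch below shows one plausible, deliberately conservative mapping; the rules and thresholds are assumptions for illustration, not Skill Vetter's real logic.

```python
# Hedged sketch of mapping findings to the three verdicts.
# Rules and thresholds here are illustrative only.
from enum import Enum


class Verdict(Enum):
    SAFE = "safe"        # low risk
    CAUTION = "caution"  # review before use
    UNSAFE = "unsafe"    # avoid


def classify(system_changes: int,
             credential_exposure: bool,
             outputs_verifiable: bool) -> Verdict:
    """Collapse findings into one verdict, erring toward caution."""
    if credential_exposure:
        return Verdict.UNSAFE   # leaked secrets are disqualifying
    if system_changes > 0 or not outputs_verifiable:
        return Verdict.CAUTION  # needs a human look before install
    return Verdict.SAFE


# Example: a skill that edits system state but keeps secrets safe.
print(classify(system_changes=2, credential_exposure=False,
               outputs_verifiable=True))  # Verdict.CAUTION
```

Treating unverifiable outputs as grounds for caution rather than safety keeps the default posture skeptical: a skill must earn the safe verdict, not merely avoid known red flags.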
Without evaluation, using a skill is a leap of faith.
This system ensures:
- risks are visible
- trust is explicit
- decisions stay local
Works alongside:
- SettlementWitness – verifies outputs
- Capability Evolver – improves safely
- Humanizer – transforms outputs
Use cases (a pre-install gate is sketched after this list):
- evaluating new skills before installation
- auditing third-party agent tools
- building safer autonomous systems
- enforcing trust boundaries
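As an example of the first use case, a pre-install gate can refuse or defer installation based on the verdict. `vet_skill` below is a hypothetical stand-in for Skill Vetter's real entry point, which may be named and invoked differently.

```python
# Sketch of gating installation on the verdict (first use case above).
import sys


def vet_skill(path: str) -> str:
    """Placeholder evaluator: assume it returns 'safe', 'caution', or 'unsafe'."""
    return "caution"  # fixed value so the sketch runs end to end


def gate_install(path: str) -> None:
    verdict = vet_skill(path)
    if verdict == "unsafe":
        sys.exit(f"refusing to install {path}: verdict is unsafe")
    if verdict == "caution":
        print(f"{path}: review before use; not installing automatically")
        return
    print(f"{path}: vetted as safe, installing")


gate_install("third-party-skill/")
```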
Installation: add this repository as a Claude skill.
Topics: ai-agents, security, risk-analysis, trust, verification
Last updated: 2026-04-02