Skill Vetter v2

Know if a skill is safe before you use it.


🧠 The Problem

Most AI skills are:

  • opaque
  • unverified
  • unclear about risks

Agents are expected to install and run them anyway.

This creates:

  • hidden security risks
  • unknown external dependencies
  • blind trust in third-party logic

✅ The Solution

Skill Vetter v2 evaluates a skill before you trust it.

It provides:

  • structured risk classification
  • capability analysis
  • trust dependency evaluation
  • clear safety verdicts

🔍 What It Analyzes

Every skill is evaluated across three dimensions:

1. Install Risk

  • file writes
  • package installs
  • system changes

2. Runtime Behavior

  • external API calls
  • data handling
  • credential exposure

3. Trust Dependencies

  • reliance on external services
  • transparency of those services
  • ability to verify outputs
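The three dimensions could be captured in a simple report structure. A minimal sketch in Python, assuming a flat list of risk flags per dimension — the field names and shape are illustrative, not the repository's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SkillReport:
    """One skill evaluation across the three dimensions (illustrative fields)."""
    name: str
    install_risk: list[str] = field(default_factory=list)        # e.g. "file writes"
    runtime_behavior: list[str] = field(default_factory=list)    # e.g. "external API calls"
    trust_dependencies: list[str] = field(default_factory=list)  # e.g. "unverifiable service"

# Example: a skill that installs packages and calls an external API.
report = SkillReport(
    name="example-skill",
    install_risk=["package installs"],
    runtime_behavior=["external API calls"],
)
total_flags = (len(report.install_risk) + len(report.runtime_behavior)
               + len(report.trust_dependencies))  # 2 flags total
```

Grouping flags by dimension keeps the verdict explainable: a reviewer can see not just *that* a skill is risky, but *where* the risk lives.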

⚖️ Clear Verdicts

Every evaluation results in:

  • safe → low risk
  • caution → review before use
  • unsafe → avoid
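The tiering above can be sketched as a threshold function over the number of risk flags an evaluation raises. The thresholds here are assumptions for illustration, not documented behavior:

```python
def verdict(flag_count: int) -> str:
    """Map a risk-flag count to a verdict tier (thresholds are assumptions)."""
    if flag_count == 0:
        return "safe"       # low risk
    elif flag_count <= 2:
        return "caution"    # review before use
    else:
        return "unsafe"     # avoid

# A skill with no flags is safe; a couple of flags warrant review;
# more than that and the skill should be avoided.
tiers = [verdict(n) for n in (0, 2, 5)]
```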

🔒 Why This Matters

Without evaluation, using a skill is a leap of faith.

This system ensures:

  • risks are visible
  • trust is explicit
  • decisions stay local

🧩 Part of a Trust Stack

Works alongside:

  • SettlementWitness → verifies outputs
  • Capability Evolver → improves safely
  • Humanizer → transforms outputs

🚀 Use Cases

  • evaluating new skills before installation
  • auditing third-party agent tools
  • building safer autonomous systems
  • enforcing trust boundaries

📦 Installation

Add this repository as a Claude skill.
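For example, the repository can be cloned into a local skills directory. The path below is an assumption — check your Claude client's documentation for the actual location:

```shell
# Clone the skill into a local skills directory.
# ~/.claude/skills is an assumed location; adjust for your setup.
git clone https://github.com/nutstrut/skill-vetter-v2.git ~/.claude/skills/skill-vetter-v2
```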


🏷️ Tags

ai-agents
security
risk-analysis
trust
verification

Metadata

Last updated: 2026-04-02
