This directory contains examples that extend skill-validator for common workflows. They are not part of the tool itself; copy and adapt them to fit your project.
- review-skill — Use during local development to iterate on a skill with your coding agent before requesting a human review.
- ci — Use when publishing skills or adding them to a project to enforce a minimum quality bar on every pull request.
## review-skill

An Agent Skill that walks a coding agent through a full skill review: structural validation, content checks, and LLM-as-judge scoring. Use it during local development so the coding agent and the skill author can iterate on the skill content together before requesting a human review or publishing.
- Checks prerequisites (skill-validator binary, API keys)
- Runs `skill-validator check` for structural validation
- Reviews content for examples, edge cases, and scope-gating
- Optionally scores the skill with an LLM judge (Anthropic, OpenAI, any OpenAI-compatible endpoint, or the Claude CLI)
- Supports cross-model comparison to validate scores across model families
- Presents a summary with prioritized action items and a publish recommendation
- Copy the `review-skill/` directory into your project's skill directory (or wherever your agent loads skills from). For Claude, for example, this is `.claude/skills/`.
- Install the skill-validator tool. If it's not already installed, the skill contains install instructions that walk the agent through helping a skill author set up their environment.
- For LLM scoring, set the relevant API key:
  - Anthropic: `export ANTHROPIC_API_KEY=sk-ant-...`
  - OpenAI: `export OPENAI_API_KEY=sk-...`
  - OpenAI-compatible: `export OPENAI_API_KEY=...` (some endpoints accept a placeholder) and provide the `--base-url` when prompted.
  - Claude CLI: no API key needed; uses the locally authenticated `claude` binary (e.g. via a company or team subscription). Note: scores may be less consistent than with API-based providers because the CLI loads local context (CLAUDE.md, memory) into each call.
- Add `.score_cache/` to your `.gitignore`. LLM scoring caches results inside each skill directory, and these caches should not be committed.
- Ask your agent to review a skill. The skill stores configuration in `~/.config/skill-validator/review-state.yaml` so subsequent runs skip prerequisite checks.
## ci

A GitHub Actions workflow and companion script that validate new or changed skills on every pull request. Use it to enforce a minimum quality bar before skills are merged: when publishing official skills for other people to use, or before adding skills to your own repo or personal coding agent setup.
- Detects which skill directories changed in a PR (via `git diff`)
- Runs `skill-validator check --strict` on each changed skill
- Writes a markdown report to the GitHub Actions job summary
- Emits inline PR annotations for errors and warnings
- Fails the workflow if any skill has errors or warnings (`--strict` mode)
- Copy `.github/workflows/validate-skills.yml` and `.github/scripts/validate-skills.sh` into your repository's `.github/` directory.
- Edit the `SKILLS_DIR` env var in the workflow to match the directory where your skills live (defaults to `skills`).
- Update the `paths` filter under `on.pull_request` to match the same directory.
- Ensure the script is executable: `chmod +x .github/scripts/validate-skills.sh`
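Steps 2 and 3 both touch the workflow file. A hedged sketch of the relevant fragment, assuming skills live under `skills/`; the surrounding keys follow standard GitHub Actions syntax, and your copied workflow file remains the source of truth:

```yaml
# Fragment of .github/workflows/validate-skills.yml (illustrative).
on:
  pull_request:
    paths:
      - "skills/**"      # keep in sync with SKILLS_DIR below

env:
  SKILLS_DIR: skills     # directory your skill directories live in
```

Keeping the `paths` filter and `SKILLS_DIR` in sync ensures the workflow only runs when a skill actually changed, and then checks exactly those skills.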
The workflow installs skill-validator from source on each run. No API keys or external services are required; it runs structural validation only.