Free. Private. Offline. No API key required.
Security scanner for AI agent skills and MCP tool bundles. Part of the SkillScan project.
SkillScan Security catches the obvious stuff so you don't have to pay Claude to find it. It runs entirely on your machine — no network calls, no telemetry, no tokens spent — and returns deterministic verdicts before you ever send a skill to an online scanner.
Use it as a free pre-filter in your CI pipeline. If it blocks, you know immediately. If it passes, you've already eliminated the easy wins before handing off to a deeper (and more expensive) analysis layer.
Verdicts: allow · warn · block
Default policy: strict.
Online AI scanners (Invariant, Lakera Guard, and others) are excellent at nuanced intent analysis. They are also billed per token. Running them on every skill in a large repository is expensive.
SkillScan handles the deterministic layer for free:
- Download-and-execute chains
- Secret exfiltration patterns
- Credential harvesting instructions
- Malicious binary artifacts
- Known-bad IOC domains and IPs
- Vulnerable dependency versions
- Prompt injection and instruction override attempts
- Social engineering credential requests
If SkillScan blocks it, you don't need to spend tokens on it. If it passes, you have a clean bill of health on the obvious vectors before your paid scanner runs.
- Offline-first. No network calls required. Runs entirely on your machine.
- Archive-safe extraction and static analysis.
- Binary artifact classification and flagging (executables, libraries, bytecode, blobs).
- Malware and instruction-abuse pattern detection (70+ static rules, 15 chain rules).
- Instruction hardening pipeline (Unicode normalization, zero-width stripping, bounded base64 decode, action-chain checks).
- IOC extraction with local intel matching (163 domains, 1,310 IPs, 2 CIDRs — updated twice daily).
- Dependency vulnerability checks (23 Python + 4 npm packages via OSV.dev).
- Social engineering and credential-harvest instruction detection (SE-001, SE-SEM-001).
- Policy profiles (`strict`, `balanced`, `permissive`) + custom policies.
- Pretty terminal output + JSON / SARIF / JUnit / compact reports.
- Auto-refresh managed intel feeds (default checks every scan, 1-hour max age).
- Versioned YAML rulepack for flexible detection updates.
- Adversarial regression corpus with expected verdicts.
- Default-on local semantic prompt-injection classifier (NLTK/classical features, no external API).
- Optional offline ML detection (`--ml-detect`) using a fine-tuned DeBERTa adapter — no API key, no cloud.
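The instruction-hardening pipeline listed above can be sketched in a few lines. This is an illustrative approximation, not SkillScan's actual implementation: it shows the general shape of NFKC normalization, zero-width stripping, and bounded base64 decoding (the specific code-point list, size bound, and regex here are assumptions):

```python
import base64
import re
import unicodedata

# Common zero-width code points used to split trigger words past naive matchers.
ZERO_WIDTH = dict.fromkeys([0x200B, 0x200C, 0x200D, 0x2060, 0xFEFF])
MAX_B64_DECODE = 4096  # bound decoded size so hostile inputs can't balloon memory


def harden(text: str) -> str:
    """Normalize text before rule matching (illustrative sketch only)."""
    text = unicodedata.normalize("NFKC", text)  # fold Unicode look-alikes
    text = text.translate(ZERO_WIDTH)           # strip zero-width characters

    def decode_run(match: re.Match) -> str:
        # Surface short base64 runs in place so hidden instructions become visible.
        try:
            raw = base64.b64decode(match.group(0), validate=True)
        except Exception:
            return match.group(0)  # not valid base64: leave it alone
        if len(raw) > MAX_B64_DECODE:
            return match.group(0)
        return raw.decode("utf-8", "replace")

    return re.sub(r"[A-Za-z0-9+/]{16,}={0,2}", decode_run, text)


print(harden("ig\u200bnore cnVuIHRoaXMgc2hlbGwgY29tbWFuZA=="))  # → ignore run this shell command
```

After normalization, the decoded text flows into the same static rules and action-chain checks as plain input.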
- PyPI: `pip install skillscan-security`
- Docker: `docker pull kurtpayne/skillscan-security`
- Pre-commit hook: `skillscan-security>=0.3.1`
Release process: docs/RELEASE_CHECKLIST.md and docs/RELEASE_ONBOARDING.md.
SBOMs: Python CycloneDX (sbom-python.cdx.json) and Docker SPDX (sbom-docker.spdx.json) are included in release artifacts.
Docker default behavior: the image includes ClamAV and enables it by default (`SKILLSCAN_CLAMAV=true`). Override with `--no-clamav`.
```bash
curl -fsSL https://raw.githubusercontent.com/kurtpayne/skillscan/main/scripts/install.sh | bash
```

Or install from PyPI:

```bash
pip install skillscan-security
```

Base install is ~25 MB. No torch, no transformers, no heavy ML stack. The `--ml-detect` flag requires an optional extra:

```bash
# CPU-only ONNX inference (~200 MB) — recommended for most users
pip install 'skillscan-security[ml-onnx]'

# Full PyTorch backend (~500 MB) — for GPU environments
pip install 'skillscan-security[ml]'
```

Development setup:

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -e '.[dev]'
```

Scan a local skill:

```bash
skillscan scan ./examples/suspicious_skill
```

Scan directly from a URL (including GitHub blob URLs):

```bash
skillscan scan "https://github.com/blader/humanizer/blob/main/SKILL.md?plain=1"
```

Save reports:

```bash
# JSON
skillscan scan ./target --format json --out report.json --fail-on never

# SARIF (GitHub code scanning)
skillscan scan ./target --format sarif --out skillscan.sarif --fail-on never

# JUnit XML (CI test report ingestion)
skillscan scan ./target --format junit --out skillscan-junit.xml --fail-on never

# Compact (terse CI logs)
skillscan scan ./target --format compact --fail-on never
```

Render a saved report:

```bash
skillscan explain ./report.json
```

Optional offline ML detection (requires the `[ml-onnx]` or `[ml]` extra):

```bash
skillscan scan ./target --ml-detect
```

The ML detector uses a fine-tuned DeBERTa adapter. It runs entirely on your machine — no API calls, no tokens, no cloud. It is the right tool for subtle semantic attacks that the static rules don't catch. For nuanced intent analysis that requires reasoning about context, see the integration bridges below.
```
$ skillscan scan examples/showcase/01_download_execute --fail-on never
╭─────────────────────────────── Verdict: BLOCK ───────────────────────────────╮
│ Target: examples/showcase/01_download_execute │
│ Policy: strict │
│ Score: 360 │
│ Findings: 2 │
╰──────────────────────────────────────────────────────────────────────────────╯
Top Findings:
- MAL-001 (critical) Download-and-execute chain
- CHN-001 (critical) Dangerous action chain: download plus execute
```

```
$ skillscan scan examples/showcase/15_secret_network_chain --fail-on never
╭─────────────────────────────── Verdict: BLOCK ───────────────────────────────╮
│ Target: examples/showcase/15_secret_network_chain │
│ Policy: strict │
│ Score: 285 │
│ Findings: 2 │
╰──────────────────────────────────────────────────────────────────────────────╯
Top Findings:
- EXF-001 (high) Sensitive credential file access
- CHN-002 (critical) Potential secret exfiltration chain
```

```
$ skillscan scan examples/showcase/20_social_engineering_credential_harvest --fail-on never
╭─────────────────────────────── Verdict: BLOCK ───────────────────────────────╮
│ Target: examples/showcase/20_social_engineering_credential_harvest │
│ Policy: strict │
│ Score: 95 │
│ Findings: 2 │
╰──────────────────────────────────────────────────────────────────────────────╯
Top Findings:
- SE-001 (high) Social engineering credential harvest
- PINJ-SEM-001 (medium) Semantic prompt injection signal
```

```
$ skillscan scan examples/showcase/21_npm_lifecycle_abuse --fail-on never
╭─────────────────────────────── Verdict: BLOCK ───────────────────────────────╮
│ Target: examples/showcase/21_npm_lifecycle_abuse │
│ Policy: strict │
│ Score: 465 │
│ Findings: 3 │
╰──────────────────────────────────────────────────────────────────────────────╯
Top Findings:
- MAL-001 (critical) Download-and-execute chain
- CHN-001 (critical) Dangerous action chain: download plus execute
- SUP-001 (high) Risky npm lifecycle script: preinstall
```

```
$ skillscan scan examples/showcase/24_binary_artifact --fail-on never
╭─────────────────────────────── Verdict: WARN ────────────────────────────────╮
│ Target: examples/showcase/24_binary_artifact │
│ Policy: strict │
│ Score: 35 │
│ Findings: 1 │
╰──────────────────────────────────────────────────────────────────────────────╯
Top Findings:
- BIN-001 (high) Executable binary artifact present
```

CLI reference:

```bash
skillscan scan <path>
skillscan explain <report.json>
skillscan policy show-default --profile strict|balanced|permissive
skillscan policy validate <policy.yaml>
skillscan intel status|list|add|remove|enable|disable|rebuild
skillscan intel sync [--force]
skillscan rule list [--format json]
skillscan uninstall [--keep-data]
skillscan-security version
```
See full command docs: docs/COMMANDS.md.
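These commands are straightforward to drive from scripts. A minimal Python wrapper, shown as a sketch that assumes only the JSON report's top-level `verdict` field (the same field the shell integration example in this README reads with `jq`):

```python
import json
import subprocess


def verdict_from_report(report_json: str) -> str:
    """Extract the top-level verdict (allow / warn / block) from a JSON report."""
    return json.loads(report_json)["verdict"]


def scan(path: str) -> str:
    """Scan one target; --fail-on never keeps the exit code at 0 so a report is always produced."""
    result = subprocess.run(
        ["skillscan", "scan", path, "--format", "json", "--fail-on", "never"],
        capture_output=True, text=True, check=True,
    )
    return verdict_from_report(result.stdout)
```

Gate expensive downstream analysis on the returned verdict, e.g. only forward targets where `scan(path)` is not `"block"`.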
Built-ins: `strict` (default), `balanced`, `permissive`
Use a custom policy:

```bash
skillscan scan ./target --policy ./examples/policies/strict_custom.yaml
```

Add a local IOC source:

```bash
skillscan intel add ./examples/intel/custom_iocs.json --type ioc --name team-iocs
```

View sources:

```bash
skillscan intel status
skillscan intel list
```

Managed intel auto-refresh runs by default on scan. You can tune or disable it:

```bash
skillscan scan ./target --intel-max-age-minutes 60
skillscan scan ./target --no-auto-intel
skillscan intel sync --force
```

SkillScan is designed to be the free pre-filter in a layered scanning pipeline. It handles deterministic checks locally so you don't spend tokens on the obvious cases. For nuanced intent analysis, pair it with an online scanner.
Invariant Analyzer provides deep semantic analysis of agent traces and skill files. Run SkillScan first to eliminate clear-cut cases:

```bash
# Only send to Invariant if SkillScan doesn't block
skillscan scan ./skill --format json --out pre-filter.json --fail-on never
if [ "$(jq -r '.verdict' pre-filter.json)" != "block" ]; then
  invariant analyze ./skill
fi
```

Or in CI (note the `id:` on the scan step, which the later `if:` condition references):

```yaml
- name: SkillScan pre-filter
  id: skillscan
  run: skillscan scan ./skills --format sarif --out skillscan.sarif
  continue-on-error: true
- name: Upload SkillScan results
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: skillscan.sarif
- name: Deep scan (only if SkillScan passes)
  if: steps.skillscan.outcome == 'success'
  run: invariant analyze ./skills
```

Lakera Guard provides real-time prompt injection detection via API. SkillScan catches the static patterns for free before you hit the API:
```python
import subprocess, json, requests

result = subprocess.run(
    ["skillscan", "scan", skill_path, "--format", "json", "--fail-on", "never"],
    capture_output=True, text=True
)
report = json.loads(result.stdout)

if report["verdict"] == "block":
    # SkillScan caught it — no API call needed
    raise ValueError(f"Skill blocked by SkillScan: {report['top_findings']}")

# SkillScan passed — send to Lakera for semantic analysis
response = requests.post(
    "https://api.lakera.ai/v1/prompt_injection",
    headers={"Authorization": f"Bearer {LAKERA_API_KEY}"},
    json={"input": skill_content},
)
```

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/kurtpayne/skillscan-security
    rev: v0.3.1
    hooks:
      - id: skillscan
        args: [--fail-on, warn]
```

- Benign sample: `examples/benign_skill`
- Suspicious sample: `examples/suspicious_skill`
- OpenAI-style sample: `examples/openai_style_tool`
- Claude-style sample: `examples/claude_style_skill`
- Comprehensive detection showcase: `examples/showcase/INDEX.md`
- Social engineering sample: `examples/showcase/20_social_engineering_credential_harvest`
- OpenClaw-compromised-style sample: `tests/fixtures/malicious/openclaw_compromised_like`
Starter bundles for OpenClaw/ClawHub, Claude-style skills, and OpenAI Actions are in:

- `integrations/openclaw/`
- `integrations/claude/`
- `integrations/openai/`
See the Platform Bundles section of the Distribution Guide for setup and rollout guidance.
```bash
./scripts/run_tests.sh test
./scripts/run_tests.sh lint
./scripts/run_tests.sh type
./scripts/run_tests.sh check
```

Or via Makefile:

```bash
make check
```

SkillScan provides a reusable GitHub Actions workflow for scanning skill artifacts in CI pipelines, with native SARIF upload to the GitHub Security tab.
```yaml
jobs:
  skillscan:
    uses: kurtpayne/skillscan-security/.github/workflows/skillscan-scan.yml@main
    with:
      scan-path: ./skills
```

See docs/GITHUB_ACTIONS.md for full documentation and examples.
```bash
skillscan uninstall

# Keep local data (intel/reports/config):
skillscan uninstall --keep-data
```

Shell script uninstall: scripts/uninstall.sh.
- Detection model: `docs/DETECTION_MODEL.md`
- Scan overview: `docs/SCAN_OVERVIEW.md`
- Architecture: `docs/ARCHITECTURE.md`
- Threat model: `docs/THREAT_MODEL.md`
- Policy guide: `docs/POLICY.md`
- Intel guide: `docs/INTEL.md`
- Testing guide: `docs/TESTING.md`
- Rules and scoring: `docs/RULES.md`
- Comprehensive examples: `docs/EXAMPLES.md`
- GitHub Actions integration: `docs/GITHUB_ACTIONS.md`
- Distribution: `docs/DISTRIBUTION.md`
- Release onboarding: `docs/RELEASE_ONBOARDING.md`
- Release checklist: `docs/RELEASE_CHECKLIST.md`
- PRD: `docs/PRD.md`
- skillscan-lint — Quality linter for AI agent skills: readability, clarity, graph integrity
- Invariant Analyzer — Deep semantic analysis of agent traces; use SkillScan as a free pre-filter
- Lakera Guard — Real-time prompt injection detection API; use SkillScan to eliminate static cases before hitting the API
- skills.sh — Community registry of AI agent skills
- ClawHub — MCP skill marketplace
- Docker Hub — `docker pull kurtpayne/skillscan-security`
- PyPI — `pip install skillscan-security`
Licensed under Apache-2.0. See LICENSE.
SkillScan performs static analysis by default and does not execute scanned artifacts. For untrusted inputs, run in a trusted isolated environment.
For URL scans, unreadable linked sources are reported as low-severity SRC-READ-ERR findings. They are flagged for review but are not treated as malicious by default.
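When post-processing saved reports, findings like these can be filtered out before triage. A sketch, assuming the JSON report exposes a `findings` array whose entries carry `id` and `severity` fields (an assumption about the schema; verify the exact field names against your own `--format json` output):

```python
def actionable_findings(report: dict) -> list:
    """Drop low-severity SRC-READ-ERR findings; field names are schema assumptions."""
    return [
        f for f in report.get("findings", [])
        if not (f.get("id") == "SRC-READ-ERR" and f.get("severity") == "low")
    ]


sample = {"findings": [
    {"id": "SRC-READ-ERR", "severity": "low"},
    {"id": "MAL-001", "severity": "critical"},
]}
print(actionable_findings(sample))  # only MAL-001 survives
```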