A code review checklist + structured prompt that forces a review to be:
- operationally realistic (on-call mindset),
- security-aware (trust boundaries + data flow),
- test-driven (prove promises),
- deployable (observe + rollback).
- Article: https://www.theneuron.ai/explainer-articles/10-vibe-coding-questions-beginners-dont-know-to-ask-but-should/
- Repo: https://github.com/cnoles1980/Vibe-Coder-Code-Review/
Given a diff/PR description and minimal context, the reviewer outputs a structured review that answers:
- Explain the diff like I’m the on-call engineer.
- Assumptions (tested vs untested).
- Trust boundaries + threat model.
- Endpoints/jobs/handlers + authn/authz coverage.
- Tracing from untrusted input to sensitive sinks.
- Sensitive data inventory + where it goes (incl. logs).
- Failure behavior (timeouts, retries, idempotency, degradation).
- Tests that prove the core promises (happy path + edge cases + “evil input”).
- Maintainability debt (duplication, deps, style drift).
- Safe deploy/observe/rollback plan.
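The answers above map naturally onto a fixed set of output sections. A minimal Python sketch of that structure (the section names below are illustrative, not mandated by the skill):

```python
# Hypothetical schema for the structured review output.
# Section names mirror the checklist above; rename to taste.
REVIEW_SECTIONS = [
    "summary_for_on_call",
    "assumptions",              # tested vs. untested
    "trust_boundaries",         # + threat model
    "endpoints_and_authz",      # endpoints/jobs/handlers coverage
    "input_to_sink_tracing",
    "sensitive_data_inventory", # where it goes, incl. logs
    "failure_behavior",         # timeouts, retries, idempotency
    "tests_proving_promises",   # happy path, edges, evil input
    "maintainability_debt",
    "deploy_observe_rollback",
]

def empty_review() -> dict:
    """Return a skeleton review with one empty slot per section."""
    return {section: [] for section in REVIEW_SECTIONS}

review = empty_review()
review["assumptions"].append("Untested: upstream service always returns JSON")
```

Forcing every section to be present (even if a section ends up as "none found") is what keeps the review from silently skipping, say, the rollback plan.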
Open skill.yaml, paste its contents into your tool’s skill registry, and call the skill with:
- PR title/description
- diff (or link + pasted snippets)
- relevant code pointers (routes, handlers, jobs, config)
- runtime context (cloud, DB, queues, auth provider, etc.)
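A sketch of bundling those four inputs before the call (field names and sample values are hypothetical; match them to whatever your tool’s skill registry actually expects):

```python
# Hypothetical helper that packages the inputs the skill expects.
def build_skill_input(title: str, diff: str, code_pointers: list[str],
                      runtime_context: dict) -> dict:
    return {
        "pr": {"title": title},
        "diff": diff,
        "code_pointers": code_pointers,  # routes, handlers, jobs, config
        "runtime": runtime_context,      # cloud, DB, queues, auth provider
    }

payload = build_skill_input(
    title="Add password-reset endpoint",
    diff="--- a/routes/auth.py\n+++ b/routes/auth.py\n...",
    code_pointers=["routes/auth.py", "jobs/send_email.py"],
    runtime_context={"cloud": "AWS", "db": "Postgres", "auth": "Auth0"},
)
```

The more runtime context you include, the more the failure-behavior and deploy/rollback sections can say something concrete instead of generic advice.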
No skill registry? Use prompt.md directly as your review prompt and paste in the diff and context.
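Wiring that up by hand is just string concatenation; a minimal sketch, assuming prompt.md sits in the repo root (the placeholder strings stand in for your real prompt, diff, and context):

```python
def build_review_prompt(template: str, diff: str, context: str) -> str:
    """Join the prompt template, diff, and runtime context into one message."""
    parts = [template.strip(), "## Diff", diff.strip(), "## Context", context.strip()]
    return "\n\n".join(parts)

# With the real file: template = open("prompt.md").read()
template = "Review this change with an on-call mindset."  # placeholder
prompt = build_review_prompt(template, "+ added retry loop", "AWS, Postgres, SQS")
```

Keeping the diff and context in clearly labeled blocks makes it obvious to the model (and to you) which text is instructions and which is untrusted input under review.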
MIT