Multi-reviewer code review using LLMs. Spawns parallel agents with different models/prompts, aggregates their feedback into a final verdict. Supports two modes — parallel aggregation and actor-critic debate — across two task types: code review and free-form questions.
Each reviewer is an agentic loop that can call tools (read files, grep, glob, git commands) to explore the repo before writing its review. A separate aggregator model deduplicates and synthesizes the individual reviews into a final verdict.
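The parallel flow can be sketched as below (toy Rust, not nitpicker's actual code — the reviewer threads and the string-joining aggregator are stand-ins for the real agentic LLM loops and aggregator model):

```rust
use std::thread;

// Each "reviewer" runs independently; in nitpicker each would be an
// agentic LLM loop with tool access. Here they are simple stubs.
fn run_reviewers(names: &[&str]) -> Vec<String> {
    let handles: Vec<_> = names
        .iter()
        .map(|name| {
            let name = name.to_string();
            thread::spawn(move || format!("{name}: looks fine"))
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

// The real aggregator is an LLM that deduplicates and synthesizes;
// here we just join the individual reviews into one verdict.
fn aggregate(reviews: &[String]) -> String {
    reviews.join("\n")
}

fn main() {
    let reviews = run_reviewers(&["claude", "gpt"]);
    let verdict = aggregate(&reviews);
    assert_eq!(verdict.lines().count(), 2);
}
```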
- Rust toolchain
- A git repository to review
- At least one configured LLM (API key or Gemini OAuth)
cargo install --git https://github.com/arsenyinfo/nitpicker
export ANTHROPIC_API_KEY="your-api-key-here"
nitpicker
nitpicker --repo /path/to/repo
nitpicker --repo /path/to/repo --prompt "focus on src/api/"
nitpicker --analyze src/components/
nitpicker --analyze        # entire repo
nitpicker --no-debate
nitpicker --no-debate --analyze src/
nitpicker --no-debate --max-turns 40
nitpicker pr
nitpicker pr https://github.com/owner/repo/pull/42
nitpicker pr --no-comment
nitpicker pr https://github.com/owner/repo/pull/42 --no-comment
nitpicker ask "should we use eyre or thiserror for error handling?"
nitpicker ask --no-debate "is this authentication flow secure?"
nitpicker ask --rounds 3 "should we split this module?"
nitpicker ask --max-turns 40 "should we split this module?"

Configuration is loaded from (first match wins):

1. `--config <path>` (explicit flag)
2. `nitpicker.toml` in repo root
3. `~/.nitpicker/config.toml` (global config)
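The first-match-wins lookup can be sketched as follows (illustrative Rust, not nitpicker's real code; the function name and path handling are assumptions):

```rust
use std::path::{Path, PathBuf};

// `cli_flag` is the value of --config, if given; it always wins.
// Otherwise, try the repo-local config, then the global one.
fn resolve_config(cli_flag: Option<&Path>, repo: &Path, home: &Path) -> Option<PathBuf> {
    if let Some(p) = cli_flag {
        return Some(p.to_path_buf());
    }
    [repo.join("nitpicker.toml"), home.join(".nitpicker/config.toml")]
        .into_iter()
        .find(|p| p.exists())
}

fn main() {
    // An explicit flag short-circuits the search entirely.
    let p = resolve_config(Some(Path::new("/tmp/custom.toml")), Path::new("."), Path::new("/root"));
    assert_eq!(p, Some(PathBuf::from("/tmp/custom.toml")));
}
```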
# create a config in current directory
nitpicker init
# create a global config at ~/.nitpicker/config.toml
nitpicker init --global

Example `nitpicker.toml`:
[defaults]
debate = true # optional, default: true
max_turns = 70 # optional, default: 70
[aggregator]
model = "claude-sonnet-4-6"
provider = "anthropic"
max_tokens = 8192 # optional, default: 8192
[[reviewer]]
name = "claude" # used in output headers and logs
model = "claude-sonnet-4-6"
provider = "anthropic"
[[reviewer]]
name = "gpt"
model = "gpt-5.2-codex"
provider = "openai_compatible"
base_url = "https://api.openai.com/v1"
api_key_env = "OPENAI_API_KEY"

Tip: use providers that were not involved in building your codebase to enforce diversity of thought.
Unknown config keys are rejected. For example, use `max_tokens` for output length; `token_limit` is not a supported field.
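A concrete example of a config that fails to load — the misspelled key is the only problem here:

```toml
[aggregator]
model = "claude-sonnet-4-6"
provider = "anthropic"
token_limit = 8192   # rejected: unknown key; the supported field is max_tokens
```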
Debate mode is enabled by default for `nitpicker`, `nitpicker ask`, and `nitpicker pr`. Pass `--no-debate` to use parallel aggregation for a single run. Use `[defaults].max_turns` or `--max-turns` to control the per-agent tool-use loop limit.
| Provider | Auth | Required fields |
|---|---|---|
| `anthropic` | `ANTHROPIC_API_KEY` env var | — |
| `gemini` | `GEMINI_API_KEY` env var, or `auth = "oauth"` | — |
| `anthropic_compatible` | env var named by `api_key_env` | `base_url`, `api_key_env` |
| `openai_compatible` | env var named by `api_key_env` | `base_url`, `api_key_env` |
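For instance, an `anthropic_compatible` reviewer (the one provider type not shown in the example config above) might look like this — the base URL and env var name are placeholders for your own endpoint:

```toml
[[reviewer]]
name = "claude-via-proxy"
model = "claude-sonnet-4-6"
provider = "anthropic_compatible"
base_url = "https://llm-proxy.example.com/v1"  # hypothetical proxy endpoint
api_key_env = "PROXY_API_KEY"                  # env var holding the key
```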
Gemini can be used via Google Code Assist OAuth (free, or with a subscription; limits apply) — no API key needed, just a Google account. This mimics the Gemini CLI's auth flow, so there are no guarantees on reliability.
[aggregator]
model = "gemini-3-flash-preview"
provider = "gemini"
auth = "oauth"
[[reviewer]]
name = "gemini"
model = "gemini-3.1-pro-preview"
provider = "gemini"
auth = "oauth"

Authenticate once before reviewing:

nitpicker --gemini-oauth

This opens a browser, completes the OAuth flow, and saves the token to `~/.nitpicker/gemini-token.json`. The token is refreshed automatically on subsequent runs.
nitpicker [OPTIONS]
nitpicker ask [--no-debate] [--rounds N] [--max-turns N] [OPTIONS] <topic>
nitpicker pr [URL] [--no-comment] [--no-debate] [--rounds N] [--max-turns N] [OPTIONS]
nitpicker init [--global]
--repo <PATH> git repository to review [default: .]
--config <PATH> config file [default: <repo>/nitpicker.toml, then ~/.nitpicker/config.toml]
--prompt <TEXT> review instructions (optional, has a sensible default)
--analyze [PATH] analyze existing code instead of reviewing changes
--no-debate use parallel aggregation instead of actor-critic debate
--rounds <N> maximum debate rounds [default: 5]
--max-turns <N> maximum tool-use turns per agent or debate turn [default: 70 via config]
--gemini-oauth run Gemini OAuth authentication flow and exit
-v, --verbose show info-level logs (hidden by default)
nitpicker pr [URL] [--no-comment] [--no-debate] [--rounds N] [--max-turns N] [--prompt TEXT] [--repo .] [--config PATH] [-v]
Reviews a GitHub PR using its title, description, and diff. Requires the `gh` CLI (`gh auth login` to authenticate).
- Without `URL`: reviews the current branch's open PR (must be run inside the repo)
- With `URL` (`https://github.com/owner/repo/pull/N`): clones the repo into a temp dir, checks out the PR branch, reviews it, then cleans up
- By default, posts the review as a PR comment; pass `--no-comment` to skip posting
- `--no-debate`, `--rounds`, and `--max-turns` work the same as in the default review mode
nitpicker ask [--no-debate] [--rounds N] [--max-turns N] [--repo .] [--config PATH] [-v] <topic>
Runs agents on a free-form question instead of a code diff. By default, two agents take turns as Actor/Critic before a meta-reviewer concludes. Pass --no-debate to switch to the parallel reviewer plus aggregator flow.
Two LLM agents take turns exploring the codebase with file/git tools and submitting verdicts. The Critic can signal agreement (agree=true) to end early. A meta-reviewer synthesizes the dialogue.
- `reviewer[0]` in config → Actor (review: Reviewer)
- `reviewer[1]` in config → Critic (review: Validator)
- `aggregator` → Meta-reviewer
Transcript saved to `{tempdir}/debate-{timestamp}.md` or `review-debate-{timestamp}.md`.
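The debate loop described above can be sketched as below (toy Rust, not nitpicker's implementation — the stand-in critic that agrees in round 2 replaces a real LLM call, and `Turn` is a hypothetical transcript entry):

```rust
// One entry in the debate transcript.
struct Turn {
    role: &'static str,
    text: String,
    agree: bool,
}

// Actor and Critic alternate each round; the Critic can end the
// debate early by setting agree=true, otherwise it runs to max_rounds.
fn debate(max_rounds: usize) -> Vec<Turn> {
    let mut transcript = Vec::new();
    for round in 1..=max_rounds {
        transcript.push(Turn { role: "actor", text: format!("verdict v{round}"), agree: false });
        // Stand-in critic: agrees once the actor has revised at least once.
        let agree = round >= 2;
        transcript.push(Turn { role: "critic", text: format!("critique c{round}"), agree });
        if agree {
            break; // early exit on agreement
        }
    }
    transcript
}

fn main() {
    let t = debate(5);
    // Agreement in round 2 means 4 turns out of a possible 10.
    assert_eq!(t.len(), 4);
    assert!(t.last().unwrap().agree);
}
```

A meta-reviewer would then consume the full transcript to produce the final verdict.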