AI cognitive pattern testing CLI. Simulates diverse user personas (with cognitive/motor profiles) against your web app to find UX issues before real users do.
Requires Nix with Flakes enabled.
```sh
# Enable Flakes (one-time setup)
mkdir -p ~/.config/nix
echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf

# Build and run
nix build github:Nozium/clawfooding
./result/bin/clawfooding --help

# Or install to your profile
nix profile install github:Nozium/clawfooding
clawfooding --help
```

First build: `npmDepsHash` needs to be computed. Run `nix build` once; it will fail and print the correct `sha256-...` hash. Replace `lib.fakeHash` in `flake.nix` with that value, then rebuild.
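In `flake.nix`, the placeholder looks roughly like this (the surrounding attribute context is an assumption; only `npmDepsHash` and `lib.fakeHash` are named above):

```nix
# Placeholder: the first `nix build` fails and prints the real hash to use here.
npmDepsHash = lib.fakeHash;
# After the failed build, substitute the printed value:
# npmDepsHash = "sha256-<printed value>";
```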
Requires Node.js 22+ and pnpm 10.8.1+.
```sh
git clone https://github.com/Nozium/clawfooding.git
cd clawfooding
pnpm install
pnpm run build
node apps/clawfooding/dist/index.mjs --help
```

```sh
# List available personas
clawfooding personas --list

# Inspect a persona's cognitive profile
clawfooding personas --inspect haruka

# Run a scenario in simulate mode (no API key needed)
clawfooding run --scenario examples/scenario-basic.yaml --simulate

# Start interactive Codex OAuth login
clawfooding --login codex

# Remote machine fallback (manual token paste)
clawfooding --login codex:device
# (tries importing from ~/.codex/auth.json first, then prompts if needed)

# Save credentials into .env (provider=credential)
clawfooding --login codex=<oauth-token>
clawfooding --login openai=<api-key>
clawfooding --login anthropic=<api-key>

# Run with a real LLM
export ANTHROPIC_API_KEY=sk-ant-...
clawfooding run --scenario examples/scenario-basic.yaml

# Override defender permission for a run (default is block)
clawfooding run --scenario examples/scenario-basic.yaml --defender-mode warn

# Benchmark across models
clawfooding bench --scenario examples/scenario-basic.yaml \
  --models "anthropic/claude-sonnet-4-5,anthropic/claude-haiku-4-5"

# View billing report
clawfooding billing --dir .clawfooding/billing
clawfooding billing --dir .clawfooding/billing --group-by daily
```

Copy `.env.example` to `.env`:

```sh
cp .env.example .env
```

Key variables:
| Variable | Description |
|---|---|
| `ANTHROPIC_API_KEY` | Anthropic API key (for Claude models) |
| `OPENAI_API_KEY` | OpenAI API key (for GPT models) |
| `OPENAI_OAUTH_TOKEN` | OpenAI/Codex OAuth token (ChatGPT subscription auth) |
| `OPENAI_BASE_URL` | Custom endpoint (Ollama, LM Studio, etc.) |
| `CLAWFOODING_DEFENDER_MODE` | Outbound guard mode: `off`, `warn`, `block` (default: `block`) |
| `CLAWFOODING_DEFAULT_MODEL` | Default model (e.g. `anthropic/claude-sonnet-4-5`) |
| `CLAWFOODING_TARGET_URL` | Target site URL |
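A minimal `.env` sketch using the variables above (all values are placeholders):

```sh
# .env (placeholders only; never commit real credentials)
ANTHROPIC_API_KEY=sk-ant-your-key-here
CLAWFOODING_DEFENDER_MODE=block
CLAWFOODING_DEFAULT_MODEL=anthropic/claude-sonnet-4-5
CLAWFOODING_TARGET_URL=https://example.com
```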
Use `--simulate` to run without any API key.
ClawFooding automatically selects the best available model based on your API keys:
- OpenAI/Codex OAuth (`OPENAI_OAUTH_TOKEN`) → `openai/gpt-5.3-codex`
- OpenAI API Key (`OPENAI_API_KEY`) → `openai/gpt-4o`
- Anthropic API Key (`ANTHROPIC_API_KEY`) → `anthropic/claude-sonnet-4-5`
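The selection can be sketched as follows. This is an illustration only, not ClawFooding's actual source; the function name, env-var detection, and the assumption that credentials are checked in the order listed above are all mine:

```typescript
// Hypothetical sketch of the documented model auto-selection.
// Assumption: credentials are probed in the order the README lists them.
function pickDefaultModel(env: Record<string, string | undefined>): string {
  if (env.OPENAI_OAUTH_TOKEN) return "openai/gpt-5.3-codex"; // Codex OAuth
  if (env.OPENAI_API_KEY) return "openai/gpt-4o";            // OpenAI API key
  if (env.ANTHROPIC_API_KEY) return "anthropic/claude-sonnet-4-5"; // Anthropic
  throw new Error("No credentials found; run with --simulate instead");
}
```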
You can always override this with `--model`:

```sh
# Auto-detected based on available API keys
clawfooding feelfree --url https://www.google.com --goal "AI trends"

# Explicitly specify model
clawfooding feelfree --url https://www.google.com --goal "AI trends" --model openai/gpt-4o
```

Available Codex models (via `OPENAI_OAUTH_TOKEN`):

- `openai/gpt-5.3-codex` — Latest frontier agentic coding model (default for Codex)
- `openai/gpt-5.2-codex` — Frontier agentic coding model
- `openai/gpt-5.1-codex-max` — Codex-optimized flagship for deep and fast reasoning
- `openai/gpt-5.2` — Latest frontier model with improvements across knowledge, reasoning and coding
- `openai/gpt-5.1-codex-mini` — Faster, smaller Codex model
Credentials saved via `--login` are written to `.env` in plaintext. This file is excluded from version control via `.gitignore`, but:

- Anyone with filesystem read access can read the keys directly.
- Prefer environment variables (`export ANTHROPIC_API_KEY=...`) in CI/CD pipelines rather than `.env` files.
- For shared machines, set restrictive file permissions: `chmod 600 .env`.
The `--defender-mode` flag (default: `block`) guards LLM outputs against credential leakage and prompt injection. Do not set `--defender-mode off` in production or shared environments. The `warn` mode logs findings without blocking; it is useful for debugging, not for production runs.
The optional `openclaw-defender` npm package can be added for enhanced protection beyond the built-in regex rules:

```sh
pnpm add openclaw-defender
```

When present, it is loaded automatically and takes precedence over the built-in detector.
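The "optional package first, built-in regex fallback" load order described above might look like this sketch. The export name `scan` and the fallback patterns are my assumptions, not the project's real code:

```typescript
// Hypothetical sketch: prefer the optional openclaw-defender package,
// fall back to a built-in regex detector when it is not installed.
async function loadDefender(): Promise<(text: string) => boolean> {
  const optionalModule = "openclaw-defender"; // resolved only at runtime
  try {
    const mod: any = await import(optionalModule);
    return mod.scan; // assumed export name
  } catch {
    // Illustrative built-in rule: flag strings that look like API keys.
    const secretLike = /\b(sk-[A-Za-z0-9-]{10,}|AKIA[0-9A-Z]{16})\b/;
    return (text) => secretLike.test(text);
  }
}
```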
The `feelfree` command launches a headless Chromium browser via Playwright to perform live Google searches. Be aware of the following risks:
- Google Terms of Service: Automated scraping of Google Search results may violate Google's ToS. Use `--simulate` for development and testing to avoid live requests.
- Fragile DOM selectors: Google changes its HTML structure frequently. The `div#search .g` selectors used for result extraction may break without notice. If no results are returned, check the selectors in `apps/clawfooding/src/commands/feelfree.ts`.
- Playwright is an optional dependency: The Nix package does NOT include Playwright. Use `--simulate` mode when running via `nix profile install`. For real browser mode, use the development environment: `pnpm install && bun apps/clawfooding/src/index.ts feelfree ...`
- Network requests: Live mode sends real HTTP requests to Google from your IP address. Rate-limited or blocked IPs will return empty results.
Billing JSONL logs stored in `.clawfooding/billing/` contain persona IDs, step descriptions, token counts, and timestamps, but no LLM response content. LLM response text is processed in memory and filtered by the defender before any output is shown. Logs are safe to retain for cost tracking.
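An illustrative billing record, one JSON object per line (the field names here are assumptions; only the categories of data listed above are confirmed):

```json
{"timestamp":"2025-01-15T09:30:00Z","personaId":"haruka","step":"open target site","inputTokens":812,"outputTokens":144}
```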
When using `clawfooding personas --soul <name>` to generate `SOUL.md` files for OpenClaw, the output contains persona behavioral profiles. These files do not contain credentials or secrets and are safe to commit.
```sh
# Enter Nix dev shell (installs pnpm, bun, etc.)
nix develop

# Or use pnpm directly
pnpm install
pnpm run build
pnpm run test
pnpm typecheck
```

See `CLAUDE.md` for detailed development conventions.