
# AWRSHIFT v2

Adaptive decision framework for AI agents. User checkpoints at every phase.

Works with any AI coding assistant that supports Agent Skills.


## Why

AI agents tend to jump straight to implementation, and most failures come from building the wrong thing. AWRSHIFT makes your agent research, define metrics, factcheck, and test in a sandbox — all before touching your main project. One dynamic flow adapts to any task complexity.

*(Diagram: AWRSHIFT v2 flow)*

## What's New in v2

| v1.0 | v2.0 |
| --- | --- |
| 3 modes (Quick/Standard/Scientific) | 1 dynamic flow — scope adapts per phase |
| Text-based questions | AskUserQuestion — structured A/B/C/D choices at every checkpoint |
| No metrics phase | EVALUATE-DESIGN — mandatory success criteria before planning |
| Factcheck only in Scientific mode | FACTCHECK — mandatory for all scopes |
| No sandbox rules | 10 safety rules — experiment sandbox is isolated from the main project |
| No implementation gate | Double gate — DECIDE(GO) + file-by-file preview before touching the main project |

## How It Works

One flow. User controls depth.

```text
IDENTIFY → RESEARCH → EVALUATE-DESIGN → HYPOTHESIZE → PLAN → FACTCHECK → TEST → DECIDE → [IMPLEMENT]
```

At every phase transition, the agent:

  1. Tells you what was done
  2. Explains what happens next
  3. Asks you to choose (A/B/C/D or your own direction)

You're always in control. The agent never proceeds silently.
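The checkpoint pattern above can be sketched in Python. Everything in this sketch is illustrative: the `checkpoint` function, its arguments, and the sample options are hypothetical stand-ins for the agent's AskUserQuestion interaction, not the skill's actual API.

```python
# Illustrative sketch of the phase-checkpoint pattern.
# All names here are hypothetical; the real skill drives the
# agent's AskUserQuestion tool rather than running Python.

PHASES = [
    "IDENTIFY", "RESEARCH", "EVALUATE-DESIGN", "HYPOTHESIZE",
    "PLAN", "FACTCHECK", "TEST", "DECIDE", "IMPLEMENT",
]

def checkpoint(done_summary, next_phase, choices, ask=input):
    """Summarize what was done, preview what happens next, and
    wait for an explicit choice. Never proceeds silently."""
    print(f"Done: {done_summary}")
    print(f"Next: {next_phase}")
    for key, label in choices.items():
        print(f"  {key}) {label}")
    answer = ask("Choose (A/B/C/D or your own direction): ").strip().upper()
    # Unknown keys fall through as a free-form direction from the user.
    return choices.get(answer, answer)

# Demo with a scripted answer instead of real stdin input.
choice = checkpoint(
    "IDENTIFY: clarified task scope and constraints",
    "RESEARCH",
    {"A": "Proceed to RESEARCH", "B": "Refine the scope",
     "C": "Skip ahead to HYPOTHESIZE", "D": "Stop here"},
    ask=lambda prompt: "a",
)
```

Injecting `ask` keeps the sketch testable; the real flow blocks on the user at every transition.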

## Quick Install

**Claude Code:**

```sh
mkdir -p .claude/skills/awrshift
curl -sL https://raw.githubusercontent.com/awrshift/skill-awrshift/main/SKILL.md \
  -o .claude/skills/awrshift/SKILL.md
```

## Usage

| You say | AWRSHIFT does |
| --- | --- |
| "Let's think this through" | Starts IDENTIFY — asks structured questions one by one |
| "Research first" | RESEARCH phase — generates questions, asks you to validate, dispatches agents |
| "Compare approaches" | HYPOTHESIZE — names options, presents comparison table |
| "What metrics should we use?" | EVALUATE-DESIGN — proposes measurable success criteria |
| "Factcheck this plan" | FACTCHECK — verifies plan against original context + optional Gemini |
| "Experiment on [topic]" | Creates experiment folder, starts full flow |

## Experiment Structure

Every experiment creates persistent documentation in your project:

```text
experiments/{NNN}-{short-name}/
├── PLAN.md              ← Status, phases, metrics, decisions
├── research/
│   └── 01-{topic}.md    ← Agent findings
├── factcheck.md         ← Verification results
└── [artifacts]          ← Code, configs, outputs
```
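As a rough illustration, the skeleton above could be scaffolded like this. The `scaffold_experiment` helper is hypothetical, not part of the skill; the directory and file names follow the tree above.

```python
import tempfile
from pathlib import Path

def scaffold_experiment(root: Path, number: int, short_name: str) -> Path:
    """Create the experiments/{NNN}-{short-name}/ skeleton.

    Hypothetical helper for illustration; the skill itself decides
    when and how these files are created.
    """
    exp = root / "experiments" / f"{number:03d}-{short_name}"
    (exp / "research").mkdir(parents=True, exist_ok=True)
    # PLAN.md holds status, phases, metrics, and decisions.
    (exp / "PLAN.md").write_text(
        f"# Experiment {number:03d}: {short_name}\n\n"
        "## Status\n\n## Phases\n\n## Metrics\n\n## Decisions\n"
    )
    # factcheck.md collects verification results later in the flow.
    (exp / "factcheck.md").touch()
    return exp

# Demo in a throwaway directory so nothing touches a real project.
exp = scaffold_experiment(Path(tempfile.mkdtemp()), 1, "example")
```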

## Sandbox Safety

During experiments, the agent NEVER modifies your main project files:

- All work happens in the `experiments/` folder
- Main project files are read-only (for context)
- Only after DECIDE(GO) + your explicit approval → changes proposed to main project
- You see the exact file list before any modification

## Integration

| Skill | When | Purpose |
| --- | --- | --- |
| brainstorm | HYPOTHESIZE phase | Multi-model ideation (Claude × Gemini) |
| gemini | FACTCHECK phase | Cross-model verification |

Both are optional; the framework works standalone.

## Key Principles

  1. **User-in-the-loop** — AskUserQuestion at every phase transition
  2. **Metrics before planning** — define success criteria before building
  3. **Factcheck before testing** — verify the plan against evidence
  4. **Sandbox first** — test in `experiments/`, implement later
  5. **Evidence-based decisions** — GO/NO-GO with measured metrics

## Works With

| Platform | Install |
| --- | --- |
| Claude Code | Copy `SKILL.md` to `.claude/skills/awrshift/` |
| Codex CLI | Copy `SKILL.md` to `.openai/skills/awrshift/` |
| Gemini CLI | Copy `SKILL.md` to `.gemini/skills/awrshift/` |
| Cursor | Copy `SKILL.md` to `.cursor/skills/awrshift/` |

## Part of the AWRSHIFT Ecosystem

## License

MIT — see LICENSE for details.


*Think before you build. Research before you code. Decide with evidence.*
