- EXECUTE all rules and requirements in task/skill files - no exceptions
- COMPLETE all entry and exit conditions for every task
- STOP at gates - proceed only when conditions are met
Complete before any other operation:
- Execute the `date` command → Store as SESSION_BASELINE_DATE
- Apply `.agents/skills/metacognition/SKILL.md` → Keep active entire session
- Use SESSION_BASELINE_DATE for all date references (WebSearch, docs, etc.)
- Verify project structure with `ls -la`
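The initialization steps above might look like the following shell sketch; the ISO date format and variable handling are assumptions, not a mandated implementation:

```shell
# Capture the session baseline date once, up front, and reuse it for
# every later date reference (WebSearch queries, doc lookups, etc.).
SESSION_BASELINE_DATE="$(date +%Y-%m-%d)"
export SESSION_BASELINE_DATE

# Path to the skill that stays active for the entire session.
METACOGNITION_SKILL=".agents/skills/metacognition/SKILL.md"

# Verify project structure before any other operation.
ls -la
```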
Universal Entry Point: Every request starts with task-analysis.md to determine the appropriate path.
Early Exit Check (BEFORE loading task-analysis.md): Does this task require reading code files to decide HOW to execute it?
- NO → Execute directly, skip task-analysis.md
- YES → Load and follow task-analysis.md
If not early exit, start here for any user request:
- Apply `.agents/tasks/task-analysis.md`
- Follow its output to select the appropriate path:
Small Scale (1-2 files) / Single Task:
- Load specific task definition (e.g., implementation.md, technical-design.md)
- Execute that task definition directly
- No workflow needed
Medium/Large Scale (3+ files) / Complex Task:
- Follow task-analysis.md recommendation for workflow selection
All tasks require Plan Injection for BLOCKING READs:
- Task-analysis.md Step 8 scans and identifies ALL BLOCKING READ requirements
- Work plans MUST contain every BLOCKING READ from workflow/tasks/skills
- Each phase verifies its BLOCKING READs are in the plan
- Gates verify Plan Injection evidence before proceeding
- Missing ANY BLOCKING READ = IMMEDIATE HALT
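The Plan Injection gate above can be sketched as a shell check. The file names (`work-plan.md`, a list of required reads) and the grep-based matching are illustrative assumptions, not the actual verification mechanism:

```shell
# Hypothetical sketch: confirm every BLOCKING READ identified by task
# analysis appears in the work plan; halt immediately if any is missing.
check_plan_injection() {
  plan="$1"    # e.g. work-plan.md
  reads="$2"   # one required BLOCKING READ path per line
  while IFS= read -r required; do
    [ -z "$required" ] && continue
    if ! grep -qF "$required" "$plan"; then
      echo "IMMEDIATE HALT: missing BLOCKING READ: $required" >&2
      return 1
    fi
  done < "$reads"
  echo "Plan Injection verified: all BLOCKING READs present."
}
```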
Task definitions define WHAT to build - never skip them:
- Verify entry gates before proceeding
- Follow Required Skills section in each task definition
Apply skills based on task type from task-analysis:
- Skills are loaded progressively as needed
- Each task definition specifies its required skills
- Unload task-specific skills after completion
Before marking any task complete:
- All tests pass (when applicable)
- All quality checks return 0 errors
- Task exit conditions are satisfied
- Work is documented as needed
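A minimal sketch of this completion gate, assuming hypothetical `run_tests` and `run_quality_checks` commands standing in for the project's own:

```shell
# Hypothetical gate: a task may be marked complete only when both the
# test suite and the quality checks succeed (0 errors).
mark_complete() {
  if run_tests && run_quality_checks; then
    echo "task complete"
  else
    echo "exit conditions not satisfied" >&2
    return 1
  fi
}
```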
Principle: Get user approval at significant milestones.
Common approval points:
- When recommending a workflow for Medium/Large tasks
- After creating design or decision documents
- When technical approach changes significantly
- At stop points specified by task definitions
VIOLATIONS TO PREVENT:
- Work plan without ALL BLOCKING READs = RETURN TO TASK ANALYSIS
- Skipping ANY BLOCKING READ = IMMEDIATE HALT
- Proceeding without task definition compliance = BLOCKING ERROR
Universal quality requirements:
- Follow TDD process for all code changes
- All quality checks must pass with 0 errors
- Follow standards defined in language-specific skills
- Each task definition specifies its quality gates
Perform self-assessment at these mandatory points:
- When the task type changes
- When unexpected errors occur
- After completing a meaningful unit of work
- Before starting a new implementation
- After completing each task from the work plan
Guidelines:
- Load skills progressively, not all at once
- Unload task-specific skills after completion
- Keep only frequently-used skills loaded
- If context feels constrained, ask user for cleanup guidance
When stuck or encountering errors:
- Re-read current task definition
- Check if required skills are loaded
- Look for anti-patterns in ai-development-guide skill
- If unable to resolve, ask user for clarification
Tasks (.agents/tasks/):
- task-analysis.md: Entry point
- work-planning.md: Create work plans
- technical-design.md: Design documentation
- acceptance-test-generation.md: Test skeleton generation
- implementation.md: Implementation guidelines
- quality-assurance.md: Quality standards
Workflows (.agents/workflows/):
- agentic-coding.md: Medium/Large scale workflow
Context Maps (.agents/context-maps/):
- task-skills-matrix.yaml: Task-to-skill mappings
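The actual schema of task-skills-matrix.yaml is not shown here; a purely illustrative shape, using skill names mentioned elsewhere in this document, might be:

```yaml
# Hypothetical sketch - the real schema of task-skills-matrix.yaml may differ.
tasks:
  implementation:
    required_skills:
      - metacognition
      - ai-development-guide
  technical-design:
    required_skills:
      - metacognition
```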
- Skipping task-analysis.md → ALWAYS start with task analysis
- Loading all skills upfront → Load progressively based on task needs
- Ignoring task entry/exit conditions → Verify gates at each step
- Working without task definitions → Task definitions define WHAT to build
- Assuming workflow is always needed → Small tasks can use direct task definitions
- Premature workflow selection → Let task-analysis determine the approach
Track internally:
- Task completion rate
- Skills actually used vs loaded
- Quality-check pass rate (should be 100%)
- Appropriate path selection (direct vs workflow)