From 62478aab3a97600ec0a63b2397d582a7b035cdf4 Mon Sep 17 00:00:00 2001 From: huangmingxia Date: Wed, 22 Oct 2025 06:05:16 +0800 Subject: [PATCH] AI-assisted test generation --- .claude/README.md | 426 ++++++++++++++++++ .claude/STARTUP_CHECKLIST.md | 230 ++++++++++ .claude/WORKFLOW_ORCHESTRATOR_GUIDE.md | 263 +++++++++++ .claude/commands/README.md | 135 ++++++ .claude/commands/full-workflow.md | 156 +++++++ .claude/commands/generate-e2e-case.md | 171 +++++++ .claude/commands/generate-report.md | 159 +++++++ .claude/commands/generate-test-case.md | 168 +++++++ .claude/commands/regenerate-e2e.md | 182 ++++++++ .claude/commands/regenerate-test.md | 177 ++++++++ .claude/commands/run-tests.md | 169 +++++++ .claude/commands/submit-pr.md | 160 +++++++ .../e2e_rules/e2e_test_case_guidelines.md | 5 + .../e2e_test_case_guidelines_test_private.md | 242 ++++++++++ .claude/config/rules/pr_submission_rules.md | 11 + .../test_case_generation_rules_hive.md | 175 +++++++ .../unified_test_generation_rules.md | 249 ++++++++++ .../config/templates/test_cases_template.md | 42 ++ .../test_coverage_matrix_template.md | 14 + .../config/templates/test_report_template.md | 131 ++++++ .gitignore | 4 + 21 files changed, 3269 insertions(+) create mode 100644 .claude/README.md create mode 100644 .claude/STARTUP_CHECKLIST.md create mode 100644 .claude/WORKFLOW_ORCHESTRATOR_GUIDE.md create mode 100644 .claude/commands/README.md create mode 100644 .claude/commands/full-workflow.md create mode 100644 .claude/commands/generate-e2e-case.md create mode 100644 .claude/commands/generate-report.md create mode 100644 .claude/commands/generate-test-case.md create mode 100644 .claude/commands/regenerate-e2e.md create mode 100644 .claude/commands/regenerate-test.md create mode 100644 .claude/commands/run-tests.md create mode 100644 .claude/commands/submit-pr.md create mode 100644 .claude/config/rules/e2e_rules/e2e_test_case_guidelines.md create mode 100644 .claude/config/rules/e2e_rules/e2e_test_case_guidelines_test_private.md create mode 100644 .claude/config/rules/pr_submission_rules.md create mode 100644 .claude/config/rules/test_case_rules/test_case_generation_rules_hive.md create mode 100644 .claude/config/rules/test_case_rules/unified_test_generation_rules.md create mode 100644 .claude/config/templates/test_cases_template.md create mode 100644 .claude/config/templates/test_coverage_matrix_template.md create mode 100644 .claude/config/templates/test_report_template.md diff --git a/.claude/README.md b/.claude/README.md new file mode 100644 index 00000000000..76984be57b8 --- /dev/null +++ b/.claude/README.md @@ -0,0 +1,426 @@ +# Hive Test Generation System + +AI-powered test generation system for OpenShift Hive component, automating the creation of comprehensive E2E and manual test cases from JIRA issues. + +## Overview + +This system provides slash commands to automate the complete test generation workflow: + +1. **Test Case Generation** - Generate test cases from JIRA issues +2. **E2E Code Generation** - Transform test cases into executable test code +3. **Test Execution** - Run tests with auto-fix capabilities +4. **Report Generation** - Create comprehensive test reports +5. 
**PR Submission** - Submit test code as pull requests + +## Quick Start + +### Complete Workflow (Recommended) + +```bash +/full-workflow HIVE-2883 +``` + +This executes the complete workflow: test case generation → E2E code generation → test execution → report generation + +### Step-by-Step Workflow + +```bash +# Step 1: Generate test cases +/generate-test-case HIVE-2883 + +# Step 2: Generate E2E code +/generate-e2e-case HIVE-2883 + +# Step 3: Run tests +/run-tests HIVE-2883 + +# Step 4: Generate report (optional) +/generate-report HIVE-2883 + +# Step 5: Submit PR +/submit-pr HIVE-2883 +``` + +## Available Commands + +| Command | Description | Duration | +|---------|-------------|----------| +| `/generate-test-case` | Generate test cases from JIRA | ~90s | +| `/generate-e2e-case` | Generate E2E test code | ~120s | +| `/run-tests` | Execute E2E tests | ~180s | +| `/generate-report` | Generate comprehensive report | ~45s | +| `/submit-pr` | Create pull request | ~30s | +| `/full-workflow` | Execute complete workflow | ~3-4min | +| `/regenerate-test` | Force regenerate test case | ~90s | +| `/regenerate-e2e` | Force regenerate E2E code | ~120s | + +## Command Details + +### `/generate-test-case JIRA_KEY` + +Generates comprehensive test cases from JIRA issues including: +- Test requirements analysis +- Test strategy and coverage matrix +- E2E and manual test cases (separated by type) + +**Output:** +``` +.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/ +├── phases/ +│ ├── test_requirements_output.md +│ └── test_strategy.md +├── test_cases/ +│ ├── {JIRA_KEY}_e2e_test_case.md +│ └── {JIRA_KEY}_manual_test_case.md +└── test_coverage_matrix.md +``` + +### `/generate-e2e-case JIRA_KEY` + +Generates executable E2E test code for all detected platforms: +- Automatic platform detection (AWS, Azure, GCP, etc.) +- Parallel code generation for multiple platforms +- Quality checks and validation + +**Output:** +``` +.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/openshift-tests-private/ +└── test/extended/{component}/ + └── {platform}.go (updated) +``` + +### `/run-tests JIRA_KEY` + +Executes E2E tests with intelligent features: +- Auto-fix for E2E code/config issues +- Product bug vs. 
E2E bug classification +- Coverage matrix updates +- Comprehensive test reports + +**Output:** +``` +.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/test_execution_results/ +├── {JIRA_KEY}_test_execution_log.txt +└── {JIRA_KEY}_comprehensive_test_results.md +``` + +### `/generate-report JIRA_KEY` + +Creates comprehensive test report consolidating: +- Test requirements and strategy +- Test cases (E2E and manual) +- Coverage matrix with statistics +- Execution results and bug classification + +**Output:** +``` +.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/test_report.md +``` + +### `/submit-pr JIRA_KEY` + +Submits E2E test code as pull request: +- Follows official PR template +- Includes test execution logs +- Applies `/hold` status +- Generates submission report + +**Output:** +- PR URL: `https://github.com/openshift/openshift-tests-private/pull/{PR_NUMBER}` + +### `/full-workflow JIRA_KEY` + +Executes complete workflow automatically: +- Orchestrates all agents in sequence +- Validates prerequisites +- Handles errors and recovery +- Generates workflow execution report + +### `/regenerate-test JIRA_KEY` + +Force regenerate test cases: +- Skips prerequisite checks +- Overwrites existing files +- Useful for fixing issues or updating requirements + +### `/regenerate-e2e JIRA_KEY` + +Force regenerate E2E code: +- Skips prerequisite checks +- Overwrites existing files +- Useful for fixing code issues or updating tests + +## Prerequisites + +### Required Tools +- **JIRA MCP** - Configured and accessible for JIRA data retrieval +- **GitHub CLI (`gh`)** - Installed and authenticated +- **OpenShift CLI (`oc`)** - For test execution with cluster access +- **Git** - For repository operations + +### Environment Setup +- KUBECONFIG set to valid OpenShift cluster +- Fork of `openshift-tests-private` configured +- Component rules exist in `.claude/config/rules/test_case_rules/` + +## Configuration + +### Directory Structure + +``` +.claude/ +├── README.md # This file +├── commands/ # Slash command implementations +│ ├── generate-test-case.md +│ ├── generate-e2e-case.md +│ ├── run-tests.md +│ ├── generate-report.md +│ ├── submit-pr.md +│ ├── full-workflow.md +│ ├── regenerate-test.md +│ ├── regenerate-e2e.md +│ └── README.md +└── config/ + ├── rules/ # Test generation rules + │ ├── test_case_rules/ # Test case generation rules + │ ├── e2e_rules/ # E2E code generation rules + │ └── pr_submission_rules.md + └── templates/ # Output templates + ├── test_cases_template.yaml + ├── test_coverage_matrix_template.md + └── test_report_template.md +``` + +### Rules and Templates + +- **Test Case Rules**: `.claude/config/rules/test_case_rules/` + - `unified_test_generation_rules.md` - Common rules for all components + - `test_case_generation_rules_{component}.md` - Component-specific rules + +- **E2E Rules**: `.claude/config/rules/e2e_rules/` + - `e2e_test_case_guidelines_test_private.md` - E2E code generation guidelines + +- **Templates**: `.claude/config/templates/` + - Define output format and structure + - Ensure consistency across generated artifacts + +## Output Artifacts + +All artifacts are generated in `.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/`: + +### Generated Files + +1. **Test Requirements** - `phases/test_requirements_output.md` + - Component name and JIRA summary + - Test requirements and scope + - Affected platforms and edge cases + +2. **Test Strategy** - `phases/test_strategy.md` + - Test coverage matrix + - Test scenarios and validation methods + +3. 
**Test Coverage Matrix** - `test_coverage_matrix.md` + - Scenario-based coverage table + - Test type classification (E2E/Manual) + - Execution status tracking + +4. **E2E Test Cases** - `test_cases/{JIRA_KEY}_e2e_test_case.md` + - Automated executable test cases + - Platform-specific scenarios + +5. **Manual Test Cases** - `test_cases/{JIRA_KEY}_manual_test_case.md` + - Manual test scenarios + - Setup and validation steps + +6. **Test Execution Results** - `test_execution_results/{JIRA_KEY}_comprehensive_test_results.md` + - Test execution summary + - Bug classification (product vs. E2E) + - Auto-fix attempts and results + +7. **Test Report** - `test_report.md` + - Consolidated report of all artifacts + - Risk assessment and recommendations + +8. **Workflow Report** - `workflow_execution_report.md` + - Workflow execution summary + - Agent status and outputs + +## Workflow Examples + +### Example 1: New Feature Testing + +```bash +# Complete workflow for new feature HIVE-2883 +/full-workflow HIVE-2883 +``` + +**Result:** +- Test cases generated with platform coverage +- E2E code created for AWS, Azure, GCP +- Tests executed with auto-fix +- Comprehensive report generated + +### Example 2: Update Existing Tests + +```bash +# Regenerate test cases after requirements change +/regenerate-test HIVE-2883 + +# Regenerate E2E code +/regenerate-e2e HIVE-2883 + +# Run updated tests +/run-tests HIVE-2883 +``` + +### Example 3: Manual Step-by-Step + +```bash +# Generate test cases +/generate-test-case HIVE-2923 + +# Review and edit test cases manually +# ... make edits ... + +# Generate E2E code +/generate-e2e-case HIVE-2923 + +# Review generated code +# ... review code ... + +# Run tests +/run-tests HIVE-2923 + +# Generate final report +/generate-report HIVE-2923 + +# Submit PR +/submit-pr HIVE-2923 +``` + +## Best Practices + +### Test Case Generation +- Review JIRA issue for completeness before generation +- Verify component-specific rules exist +- Check test coverage matrix for gaps + +### E2E Code Generation +- Ensure test cases are finalized before code generation +- Review generated code for platform coverage +- Validate code compiles before execution + +### Test Execution +- Verify cluster connectivity before running tests +- Monitor auto-fix attempts for patterns +- Review bug classification accuracy + +### PR Submission +- Ensure tests pass before submitting +- Review PR body for completeness +- Remove `/hold` after review + +## Troubleshooting + +### Common Issues + +**Issue: JIRA MCP not accessible** +```bash +# Verify MCP configuration +# Check JIRA credentials +# Fallback to web fetch if needed +``` + +**Issue: Test execution fails** +```bash +# Check cluster connectivity: oc cluster-info +# Verify KUBECONFIG is set +# Review auto-fix suggestions +``` + +**Issue: Code generation produces errors** +```bash +# Regenerate test cases: /regenerate-test JIRA_KEY +# Verify test case format +# Check platform detection +``` + +**Issue: PR creation fails** +```bash +# Verify gh authentication: gh auth status +# Check fork configuration +# Review test execution logs exist +``` + +### Error Recovery + +1. **Workflow Failure**: Review partial execution report +2. **Agent Failure**: Check specific agent error message +3. **Prerequisite Missing**: Run prerequisite command first +4. 
**Regenerate Mode**: Use `/regenerate-*` commands to force updates + +## Performance Optimization + +### Parallel Execution +- Rules and templates loaded in parallel +- Platform code generation runs concurrently +- Independent validation checks parallelized + +### Efficiency Tips +- Use `/full-workflow` for end-to-end automation +- Review artifacts before regeneration +- Clean up old artifacts periodically + +## Advanced Usage + +### Custom Component Rules + +Create component-specific rules in `.claude/config/rules/test_case_rules/`: + +```markdown +# test_case_generation_rules_{component}.md + +## E2E Test Classification +- Scenario types that should be E2E +- Platform requirements +- Component-specific patterns + +## Manual Test Classification +- Scenarios requiring manual setup +- Edge cases for manual validation +``` + +### Custom Templates + +Modify templates in `.claude/config/templates/` to customize output format: +- Test case structure +- Coverage matrix format +- Report sections and content + +## Support and Documentation + +### Command Help +- Each command file in `commands/` contains detailed documentation +- Run command to see inline help and examples +- Check `commands/README.md` for reference + +### Configuration Help +- Review rule files for guidelines +- Check template files for format requirements +- See workflow orchestrator for agent coordination + +## Contributing + +When adding new commands or agents: +1. Create command file in `commands/` +2. Include frontmatter with metadata +3. Document implementation steps +4. Add examples and error handling +5. Update this README + +## Version + +**Version:** 1.0.0 +**Last Updated:** 2025-01-21 +**Component:** Hive Test Generation System + diff --git a/.claude/STARTUP_CHECKLIST.md b/.claude/STARTUP_CHECKLIST.md new file mode 100644 index 00000000000..eade9add993 --- /dev/null +++ b/.claude/STARTUP_CHECKLIST.md @@ -0,0 +1,230 @@ +# Startup Checklist + +## CRITICAL ENFORCEMENT - NO EXCEPTIONS + +**ABSOLUTE REQUIREMENT: You MUST execute EVERY SINGLE STEP in agent YAML configurations. NO SKIPPING, NO COMBINING, NO SIMPLIFYING.** + +## EXECUTION EFFICIENCY REQUIREMENTS (MANDATORY) + +**CRITICAL: Must execute with maximum efficiency and minimal verbosity** + +**OUTPUT FORMAT ENFORCEMENT**: +- Each step must have explicit output: `✅ Step X completed: [result]` +- NEVER skip step numbers (Step 1 → Step 2 → Step 3 → Step 4) +- Silent thinking execution (no verbose explanations), but clear step completion markers +- NO process descriptions or step-by-step commentary + +**VIOLATION ENFORCEMENT**: If providing ANY explanatory text or verbose process descriptions, this is an EFFICIENCY VIOLATION and execution must restart with direct execution only. 
+ +## FRESH START ENFORCEMENT + +**CRITICAL RULE: Every user request is a fresh execution, even if it happens in the SAME conversation.** + +### What MUST Be Re-read Each Request: +- ✅ **MUST always re-read workflow_orchestrator.yaml** (user may have modified it) +- ✅ **MUST always re-read the agent YAML config** (user may have modified it) +- ✅ **MUST always re-read rule files if agent requires them** (rules may have changed) + +### What CAN Be Reused Across Requests: +- ✅ **Generated artifacts** (test_artifacts/*) - unless user says "regenerate/re-create" +- ✅ **E2E test code** (temp_repos/*) - unless user says "regenerate/re-create" +- ✅ **Prerequisite validation** - orchestrator will check if artifacts exist + +### Key Principle: +- **Configuration** = Always re-read (may have changed) +- **Artifacts** = Reuse if exists (unless regenerate mode) +- **Agent execution** = Follow orchestrator's prerequisite validation logic +- Each user request = enforce mandatory startup protocol + +## MANDATORY Steps + +### Before Processing User Request - USE WORKFLOW ORCHESTRATOR +- [ ] **MANDATORY: ALWAYS start by reading workflow_orchestrator.yaml**: Read `config/agents/workflow_orchestrator.yaml` +- [ ] **Let orchestrator identify workflow automatically**: Use workflow definitions and trigger keywords +- [ ] **Orchestrator will automatically**: + - Extract JIRA issue key from user request + - Match user request against workflow trigger keywords + - Detect REGENERATE mode ("re-create", "recreate", "re-generate", "regenerate") + - Validate prerequisites before execution + - Execute agents in correct sequence + - Generate workflow execution report + +### Manual Agent Identification (If Orchestrator Cannot Match) +If workflow_orchestrator cannot identify a workflow, fallback to manual identification: +- [ ] Identify request type and agent to execute: + - "Create test case for JIRA-XXX" → `test_case_generation` agent + - "Generate E2E code/test for JIRA-XXX" → `e2e_test_generation_openshift_private` agent + - "Run E2E tests for JIRA-XXX" → `test-executor` agent + - "Create E2E PR for JIRA-XXX" → `pr-submitter` agent + +### For ALL Agent Executions (Critical Rules) +- [ ] **NEVER use Task tool for agent workflows** +- [ ] **ALWAYS read agent YAML config directly using Read tool** +- [ ] **Execute EACH step in agent config manually using available tools** +- [ ] **Verify each step's output before proceeding to next step** +- [ ] **Follow agent config instructions exactly - no skipping or simplifying** + +### CRITICAL: Rule Keyword Enforcement (MANDATORY) +**When reading any rule files, configuration files, or guideline documents:** +- [ ] **MANDATORY keywords = ABSOLUTE REQUIREMENT** - Must be executed without exception +- [ ] **NEVER keywords = ABSOLUTE PROHIBITION** - Must never be violated under any circumstances +- [ ] **CRITICAL keywords = STOP and verify** - Must pause and verify compliance before proceeding +- [ ] **FORBIDDEN keywords = ABSOLUTE PROHIBITION** - Same as NEVER, must not be violated +- [ ] If ANY of these keywords are violated → **STOP immediately and restart from beginning** + +**Enforcement Rules:** +- When encountering "MANDATORY: Do X" → X must be done, no alternatives +- When encountering "NEVER do Y" → Y is absolutely prohibited, no exceptions +- When encountering "CRITICAL: Check Z" → Must verify Z before any further action +- When encountering "FORBIDDEN: Action W" → Action W is absolutely prohibited +- Violation of any keyword rule = **Complete execution failure, must 
restart** + +## STRICT EXECUTION PROTOCOL + +### Step-by-Step Execution Requirements +1. **Read agent YAML config directly** - Use Read tool, never Task tool +2. **Execute steps manually** - Follow config steps exactly, verify each step +3. **Use correct directory structure** - test_artifacts/{COMPONENT}/{JIRA_KEY}/ +4. **Parallel execution mandatory** - Execute independent tool calls in parallel within same message + +### MANDATORY VERIFICATION FOR EACH STEP +- [ ] Execute step exactly as written in YAML config +- [ ] Verify step output/result +- [ ] Log completion message: `✅ Step X completed: [brief result]` +- [ ] Show step output/result (concise format) +- [ ] Confirm step meets success criteria +- [ ] Only then proceed to next step + +### VIOLATION DETECTION +- [ ] If ANY step is skipped → STOP and restart from beginning +- [ ] If steps are combined → STOP and restart from beginning +- [ ] If step verification is missing → STOP and restart from beginning +- [ ] If different commands are used → STOP and restart from beginning +- [ ] If verbose explanations provided → EFFICIENCY VIOLATION, restart + +## Execution Output Format + +### MANDATORY OUTPUT FORMAT FOR ALL AGENTS: +``` +Step 1: Reading agent config +✅ Step 1 completed: Agent config loaded from config/agents/.yaml + +Step 2: [Action from YAML step 1] +✅ Step 2 completed: [Brief result] + +Step 3: [Action from YAML step 2] +✅ Step 3 completed: [Brief result] + +[Continue for ALL steps in the YAML config...] +``` + +### EFFICIENCY RULES: +- **60-70% fewer tokens than verbose mode** +- **No process descriptions** - only step completions +- **No explanatory text** - only execution results +- **Brief result summaries** - 1-2 sentences maximum per step +- **Clear step numbers** - never skip sequence + +## Agent Execution Examples (Using Workflow Orchestrator) + +### Example 1: Complete Flow (Recommended Approach) +``` +User: "Generate test cases and run them for HIVE-2883" + +Step 1: Read workflow_orchestrator.yaml +✅ Step 1 completed: Orchestrator config loaded + +Step 2: Parse user request +✅ Step 2 completed: JIRA key=HIVE-2883, Workflow=full_flow, Regenerate=false + +Step 3: Validate prerequisites +✅ Step 3 completed: All prerequisites validated + +Step 4: Execute workflow agents + 4.1: Executing test_case_generation agent + ✅ Agent completed: 4 files generated + 4.2: Executing e2e_test_generation_openshift_private agent + ✅ Agent completed: E2E code integrated + 4.3: Executing test-executor agent + ✅ Agent completed: Tests executed + +Step 5: Generate workflow report +✅ Workflow completed: full_flow for HIVE-2883 (3 agents executed) +``` + +### Example 2: Single Agent via Orchestrator +``` +User: "Create test case for HIVE-2883" + +Step 1: Read workflow_orchestrator.yaml +✅ Step 1 completed: Orchestrator config loaded + +Step 2: Parse user request +✅ Step 2 completed: Workflow identified=test_case_only + +Step 3: Execute workflow +✅ Agent test_case_generation completed + +Output: 4 files in test_artifacts/hive/HIVE-2883/ +``` + +### Example 3: Regenerate Mode via Orchestrator +``` +User: "re-create e2e test for HIVE-2883" + +Step 1: Read workflow_orchestrator.yaml +✅ Step 1 completed: Orchestrator config loaded + +Step 2: Parse user request +✅ Step 2 completed: REGENERATE mode detected, Workflow=e2e_generation + +Step 3: Validate prerequisites +⚠️ REGENERATE mode - skipping prerequisite checks + +Step 4: Execute workflow +✅ Agent e2e_test_generation_openshift_private completed (overwrite mode) +``` + +### Example 4: E2E Test 
Generation (Shows Sub-Agent Orchestration) +``` +User: "Generate E2E code for HIVE-2883" + +Step 1: Read workflow_orchestrator.yaml +✅ Workflow identified: e2e_generation + +Step 2: Execute e2e_test_generation_openshift_private agent + → This agent is also an orchestrator, executing 4 sub-agents: + 2.1: e2e_validation_agent + 2.2: repository_setup_agent + 2.3: e2e_code_generation_agent + 2.4: e2e_quality_check_agent +✅ E2E generation workflow completed +``` + +### Example 5: PR Submission via Orchestrator +``` +User: "Create PR for HIVE-2883" + +Step 1: Read workflow_orchestrator.yaml +✅ Workflow identified: pr_submission + +Step 2: Validate prerequisites +✅ E2E code exists in openshift-tests-private + +Step 3: Execute pr-submitter agent +✅ PR created and submitted + +Output: GitHub PR link +``` + +### Example 6: Manual Fallback (Orchestrator Cannot Match) +``` +User: "Do something custom with HIVE-2883" + +Step 1: Read workflow_orchestrator.yaml +⚠️ No workflow matched - falling back to manual identification + +Step 2: Manual agent identification +→ AI determines appropriate agent based on context +→ Execute selected agent directly +``` \ No newline at end of file diff --git a/.claude/WORKFLOW_ORCHESTRATOR_GUIDE.md b/.claude/WORKFLOW_ORCHESTRATOR_GUIDE.md new file mode 100644 index 00000000000..6cd7619b6d8 --- /dev/null +++ b/.claude/WORKFLOW_ORCHESTRATOR_GUIDE.md @@ -0,0 +1,263 @@ +# Workflow Orchestrator Usage Guide + +## Overview + +The **Workflow Orchestrator** (`workflow_orchestrator.yaml`) is a top-level agent that automatically identifies user intent and orchestrates multi-agent workflows. It eliminates the need for manual workflow identification and ensures correct agent execution order. + +## Key Benefits + +✅ **Automatic Intent Recognition** - Parses user requests and identifies workflows +✅ **Dependency Management** - Validates prerequisites before execution +✅ **REGENERATE Mode Support** - Automatically handles re-creation scenarios +✅ **Error Handling** - Unified error handling across all workflows +✅ **Execution Reports** - Generates comprehensive workflow execution reports + +--- + +## Supported Workflows + +### 1. **full_flow** - Complete Test Generation and Execution +**Trigger Keywords**: "generate test cases and run", "complete flow", "full test generation" + +**Agents Executed**: +1. `test_case_generation` - Generate Polarion test cases +2. `e2e_test_generation_openshift_private` - Generate E2E test code +3. `test-executor` - Execute E2E tests + +**Example**: +``` +User: "Generate test cases and run them for HIVE-2883" +``` + +--- + +### 2. **test_case_only** - Test Case Generation Only +**Trigger Keywords**: "create test case", "generate test case", "test case for" + +**Agents Executed**: +1. `test_case_generation` - Generate Polarion test cases + +**Example**: +``` +User: "Create test case for HIVE-2883" +``` + +--- + +### 3. **e2e_generation** - E2E Test Generation +**Trigger Keywords**: "generate e2e", "create e2e test", "e2e code for" + +**Agents Executed**: +1. `e2e_test_generation_openshift_private` - Generate E2E test code (orchestrator) + - Sub-agents: validation → setup → generation → quality check + +**Example**: +``` +User: "Generate E2E code for HIVE-2883" +``` + +--- + +### 4. **test_execution** - Test Execution Only +**Trigger Keywords**: "run e2e tests", "execute tests", "run tests for" + +**Agents Executed**: +1. `test-executor` - Execute E2E tests and generate reports + +**Example**: +``` +User: "Run E2E tests for HIVE-2883" +``` + +--- + +### 5. 
**pr_submission** - PR Submission +**Trigger Keywords**: "create pr", "submit pr", "create pull request" + +**Agents Executed**: +1. `pr-submitter` - Submit PR to openshift-tests-private + +**Example**: +``` +User: "Create PR for HIVE-2883" +``` + +--- + +### 6. **e2e_and_run** - E2E Generation and Execution +**Trigger Keywords**: "generate e2e and run", "create e2e and execute" + +**Agents Executed**: +1. `e2e_test_generation_openshift_private` - Generate E2E test code +2. `test-executor` - Execute E2E tests + +**Example**: +``` +User: "Generate E2E and run for HIVE-2883" +``` + +--- + +## REGENERATE Mode + +### Detection Keywords +- `"re-create"` / `"recreate"` +- `"re-generate"` / `"regenerate"` + +### Behavior +When REGENERATE mode is detected: +- ✅ Skip prerequisite checks for agents with `skip_on_regenerate: true` +- ✅ Overwrite existing files without confirmation +- ✅ Force re-execution even if output already exists + +### Examples +```bash +# Regenerate test case (always regenerates) +"re-create test case for HIVE-2883" + +# Regenerate E2E test (skips test case prerequisite check) +"recreate e2e test for HIVE-2883" + +# Normal generation (checks prerequisites) +"generate e2e test for HIVE-2883" # ← Will fail if test case doesn't exist +``` + +--- + +## How It Works + +### Execution Flow + +``` +┌─────────────────────────────────────┐ +│ 1. Parse User Request │ +│ - Extract JIRA key │ +│ - Detect REGENERATE mode │ +│ - Convert to lowercase │ +└─────────────────────────────────────┘ + ↓ +┌─────────────────────────────────────┐ +│ 2. Identify Workflow │ +│ - Match trigger keywords │ +│ - Select best matching workflow │ +└─────────────────────────────────────┘ + ↓ +┌─────────────────────────────────────┐ +│ 3. Validate Prerequisites │ +│ - Check required files exist │ +│ - Skip if REGENERATE mode │ +└─────────────────────────────────────┘ + ↓ +┌─────────────────────────────────────┐ +│ 4. Execute Agents in Sequence │ +│ - Read agent YAML config │ +│ - Execute all agent steps │ +│ - Verify completion │ +└─────────────────────────────────────┘ + ↓ +┌─────────────────────────────────────┐ +│ 5. Generate Execution Report │ +│ - List executed agents │ +│ - List output files │ +│ - Calculate execution time │ +└─────────────────────────────────────┘ +``` + +--- + +## Prerequisite Checks + +Each workflow validates prerequisites before execution: + +| Workflow | Agent | Prerequisite File/Directory | +|----------|-------|----------------------------| +| **full_flow** | e2e_test_generation | `test_artifacts/{COMPONENT}/{JIRA_KEY}/test_cases/{JIRA_KEY}_test_case.md` | +| **full_flow** | test-executor | `temp_repos/openshift-tests-private/test/extended/cluster_operator/{COMPONENT}/` | +| **e2e_generation** | e2e_test_generation | `test_artifacts/{COMPONENT}/{JIRA_KEY}/test_cases/{JIRA_KEY}_test_case.md` | +| **test_execution** | test-executor | `temp_repos/openshift-tests-private/test/extended/cluster_operator/{COMPONENT}/` | +| **pr_submission** | pr-submitter | `temp_repos/openshift-tests-private/test/extended/cluster_operator/{COMPONENT}/` | + +**Note**: Test case generation has no prerequisites and can always run. 
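+In practice, each prerequisite check reduces to a simple file or directory existence test. A minimal sketch of what such a check could look like, assuming the orchestrator shells out to Bash and that `COMPONENT`, `JIRA_KEY`, and the REGENERATE flag have already been parsed from the request (the variable names here are illustrative, not taken from the orchestrator config):
+
+```bash
+# Hypothetical prerequisite check for the e2e_generation workflow
+PREREQ="test_artifacts/${COMPONENT}/${JIRA_KEY}/test_cases/${JIRA_KEY}_test_case.md"
+
+if [[ "${REGENERATE}" == "true" ]]; then
+  echo "⚠️ REGENERATE mode - skipping prerequisite checks"
+elif [[ ! -f "${PREREQ}" ]]; then
+  echo "❌ Missing prerequisite: ${PREREQ} - run the test case generation workflow first"
+  exit 1
+fi
+```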
+ +--- + +## Output Files + +The orchestrator generates a comprehensive execution report: + +**File**: `test_artifacts/{COMPONENT}/{JIRA_KEY}/workflow_execution_report.md` + +**Contents**: +- Workflow name and description +- JIRA issue key +- REGENERATE mode status +- List of agents executed with status +- All generated output files +- Total execution time +- Any errors encountered + +--- + +## Adding New Workflows + +To add a new workflow, edit `config/agents/workflow_orchestrator.yaml`: + +```yaml +workflows: + your_new_workflow: + name: "Your Workflow Name" + description: "What this workflow does" + trigger_keywords: + - "keyword 1" + - "keyword 2" + agents: + - name: agent_name + config: "config/agents/agent_name.yaml" + prerequisite: previous_agent_name # or null + prerequisite_check: "path/to/check" + skip_on_regenerate: true # or false + description: "What this agent does" +``` + +--- + +## Manual Fallback + +If the orchestrator cannot match a workflow (no keywords match), it will: +1. Log a warning: `⚠️ No workflow matched - falling back to manual identification` +2. Allow AI to manually identify the appropriate agent +3. Execute the selected agent directly + +This ensures the system remains flexible for edge cases. + +--- + +## Best Practices + +1. **Always use the orchestrator** - Start every request by reading `workflow_orchestrator.yaml` +2. **Use clear trigger keywords** - Help the orchestrator identify the correct workflow +3. **Include JIRA key** - Always include the JIRA issue key in the request +4. **Use REGENERATE keywords** - Explicitly use "re-create" or "regenerate" when needed +5. **Check execution reports** - Review the generated workflow execution report + +--- + +## Troubleshooting + +### Problem: Workflow not matched +**Solution**: Use more specific trigger keywords from the workflow definitions + +### Problem: Prerequisite check fails +**Solution**: Either run the prerequisite workflow first, or use REGENERATE mode to skip checks + +### Problem: Agent execution fails +**Solution**: Check the workflow execution report for error details and recovery suggestions + +--- + +## Integration with STARTUP_CHECKLIST.md + +The orchestrator is now the **primary entry point** for all agent executions. See `STARTUP_CHECKLIST.md` for detailed execution protocols and examples. + +**Key Rule**: Always read `config/agents/workflow_orchestrator.yaml` before processing any user request. 
+ diff --git a/.claude/commands/README.md b/.claude/commands/README.md new file mode 100644 index 00000000000..0a9ba29511a --- /dev/null +++ b/.claude/commands/README.md @@ -0,0 +1,135 @@ +# Slash Commands + +## Command Reference + +All commands follow the format: `/command-name JIRA_KEY` + +### Core Workflow Commands + +| Command | Description | Duration | +|---------|-------------|----------| +| `/generate-test-case` | Generate test cases from JIRA | ~90s | +| `/generate-e2e-case` | Generate E2E test code | ~120s | +| `/run-tests` | Execute E2E tests | ~180s | +| `/generate-report` | Generate comprehensive report | ~45s | +| `/submit-pr` | Create pull request | ~30s | +| `/full-workflow` | Execute complete workflow | ~3-4min | + +### Regeneration Commands + +| Command | Description | +|---------|-------------| +| `/regenerate-test` | Force regenerate test case | +| `/regenerate-e2e` | Force regenerate E2E code | + +## Command Flow + +``` +/generate-test-case → /generate-e2e-case → /run-tests → /submit-pr + (90s) (120s) (180s) (30s) + ↓ + /generate-report + (45s) + +Complete workflow: /full-workflow (combines first 3 steps) +``` + +## Quick Start + +### Complete Workflow (Recommended) +```bash +/full-workflow HIVE-2883 +``` + +### Step-by-Step Workflow +```bash +# Step 1: Generate test cases +/generate-test-case HIVE-2883 + +# Step 2: Generate E2E code +/generate-e2e-case HIVE-2883 + +# Step 3: Run tests +/run-tests HIVE-2883 + +# Step 4: Generate report (optional) +/generate-report HIVE-2883 + +# Step 5: Submit PR +/submit-pr HIVE-2883 +``` + +### Regeneration +```bash +# Force regenerate test case +/regenerate-test HIVE-2883 + +# Force regenerate E2E code +/regenerate-e2e HIVE-2883 +``` + +## Prerequisites + +### Required Tools +- JIRA MCP configured and accessible +- GitHub CLI (`gh`) installed and authenticated +- OpenShift cluster kubeconfig available +- Fork of openshift-tests-private configured + +### Environment Setup +Ensure your environment meets the requirements listed in each command's documentation. + +## Output Structure + +All artifacts are generated in `.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/`: + +``` +.claude/test_artifacts/hive/HIVE-2883/ +├── phases/ +│ ├── test_requirements_output.md +│ ├── test_strategy.md +│ └── comprehensive_test_results.md +├── test_cases/ +│ └── HIVE-2883_test_cases.md +├── test_coverage_matrix.md +└── test_report.md +``` + +E2E code is generated in `.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/openshift-tests-private/`: + +``` +.claude/test_artifacts/hive/HIVE-2883/openshift-tests-private/ +└── test/extended/hive/ + └── hive_2883_test.go +``` + +## Command Details + +Each command is documented with: +- **Name**: Command identifier +- **Synopsis**: Usage syntax +- **Description**: What the command does +- **Implementation**: Which agent it executes +- **Return Value**: Success/failure outputs +- **Examples**: Usage examples +- **Arguments**: Required parameters +- **Prerequisites**: Dependencies +- **See Also**: Related commands + +Use `/command-name` to access detailed documentation for each command. 
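+For orientation, a new command file that follows this structure might begin like the sketch below (the command name and field values are placeholders, not an actual command in this repository):
+
+```markdown
+---
+name: my_command
+description: One-line summary of what the command does
+tools: Read, Write, Bash
+argument-hint: [JIRA_KEY]
+---
+
+## Name
+my-command
+
+## Synopsis
+/my-command JIRA_KEY
+
+## Description
+What the command does and where it fits in the workflow.
+```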
+ +## Workflow Orchestrator + +The workflow orchestrator (`config/agents/workflow_orchestrator.md`) manages: +- Trigger keyword matching +- Prerequisite validation +- Agent execution sequencing +- Error handling and recovery + +## Related Documentation + +- **Agent Configurations**: `.claude/config/agents/` +- **Rules**: `.claude/config/rules/` +- **Templates**: `.claude/config/templates/` +- **Workflow Guide**: `WORKFLOW_ORCHESTRATOR_GUIDE.md` +- **Startup Checklist**: `STARTUP_CHECKLIST.md` diff --git a/.claude/commands/full-workflow.md b/.claude/commands/full-workflow.md new file mode 100644 index 00000000000..cf7d36923c4 --- /dev/null +++ b/.claude/commands/full-workflow.md @@ -0,0 +1,156 @@ +--- +name: workflow_orchestrator +description: Automatically orchestrate multi-agent workflows based on user intent +tools: Read, Write, Edit, Bash, Grep, Glob, gh, jira-mcp-snowflake MCP, DeepWiki-MCP +argument-hint: [JIRA_KEY] +--- + +## Name +full-workflow + +## Synopsis +``` +/full-workflow JIRA_KEY +``` + +## Description +The `full-workflow` command executes the complete test generation workflow, automating all steps from test case generation to test execution. + +This command combines three agents in sequence: +1. Test case generation +2. E2E test code generation +3. Test execution and reporting + +## When invoked with a JIRA issue key (e.g., HIVE-2883): + +## Implementation + +You are an OpenShift QE Workflow Manager focused on multi-agent workflow orchestration and dependency management. + +### STEP 1: Parse User Request +- **PARSE:** Extract JIRA issue key from user request using regex pattern `(HIVE|CCO|SPLAT)-\d+` +- **PARSE:** Detect REGENERATE mode by checking for keywords: 're-create', 'recreate', 're-generate', 'regenerate' +- **PARSE:** Convert user request to lowercase for keyword matching + +### STEP 2: Identify Workflow +- **MATCH:** Iterate through all workflows and check trigger_keywords against user request +- **MATCH:** Select the workflow with the most specific keyword match +- **MATCH:** If no workflow matches, default to single agent execution based on context +- **LOG:** Output identified workflow name and description + +### STEP 3: Prerequisite Validation +- **VALIDATE:** For each agent in the workflow sequence: + - IF REGENERATE mode AND skip_on_regenerate=true: SKIP prerequisite check + - ELSE IF prerequisite exists: Check if prerequisite_check file/directory exists + - IF prerequisite missing AND NOT regenerate mode: STOP and report missing prerequisite + - LOG: Prerequisite validation results for each agent + +### STEP 4: Execute Workflow +- **EXECUTE:** For each agent in workflow.agents sequence: + - **STEP 4.1:** Read agent markdown config from agent.config path + - **STEP 4.2:** Log agent execution start: '✅ Executing agent: {agent.name}' + - **STEP 4.3:** Execute ALL steps defined in the agent's markdown config + - **STEP 4.4:** Verify agent completion by checking output files + - **STEP 4.5:** Log agent completion: '✅ Agent {agent.name} completed' + - IF agent fails: STOP workflow and proceed to error handling + +### STEP 5: Workflow Completion and Reporting +- **REPORT:** Generate workflow execution summary +- **REPORT:** List all executed agents and their status +- **REPORT:** List all generated output files and their locations +- **REPORT:** Calculate total execution time +- **LOG:** '✅ Workflow {workflow.name} completed successfully' + +### STEP 6: Error Handling +- **ERROR:** If any agent fails during execution: + - LOG: Error details including agent name, 
step number, and error message + - STOP: Halt workflow execution immediately + - REPORT: Generate partial workflow execution report + - SUGGEST: Provide recovery suggestions based on error type + +## Workflow Definitions + +### Complete Test Generation and Execution +**Trigger Keywords:** +- "generate test cases and run" +- "complete flow for" +- "full test generation" +- "create and run tests" + +**Agents:** +1. **test_case_generation** - Generate test cases - E2E or Manual or both +2. **e2e_test_generation_openshift_private** - Generate E2E test code +3. **test-executor** - Execute E2E tests +4. **test_report_generation** - Generate comprehensive test report + +## Performance Requirements +- Parse request: 1 second +- Identify workflow: 1 second +- Validate prerequisites: 2-5 seconds +- Execute agents: Depends on agents +- Generate report: 2 seconds +- Total target time: 3-4 minutes for complete workflow + +## Critical Requirements +- **MANDATORY: Execute actual tests** - No simulated execution +- **MANDATORY: Validate prerequisites** - Check dependencies before execution +- **MANDATORY: Handle REGENERATE mode** - Skip checks when appropriate +- **FORBIDDEN: Skip error analysis** - Must analyze all failures +- **STOP on agent failure** - Halt workflow immediately on errors + +## Error Handling +- **IF workflow identification fails:** Default to single agent execution +- **IF prerequisite validation fails:** Report missing dependencies and stop +- **IF agent execution fails:** Stop workflow and provide recovery suggestions +- **IF regenerate mode detected:** Skip prerequisite checks for appropriate agents + +## Examples + +1. **Basic usage**: + ``` + /full-workflow HIVE-2883 + ``` + +2. **For different component**: + ``` + /full-workflow CCO-1234 + ``` + +## Arguments +- **$1** (required): JIRA issue key (e.g., HIVE-2883, CCO-1234) + +## Prerequisites +- JIRA MCP configured and accessible +- GitHub CLI (`gh`) installed and authenticated +- OpenShift cluster kubeconfig available (for test execution) +- Fork of openshift-tests-private configured + +## Output Structure +``` +.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/ +├── phases/ +│ ├── test_requirements_output.md +│ ├── test_strategy.md +│ └── test_case_design.md +├── test_cases/ +│ └── {JIRA_KEY}_test_cases.md +├── test_coverage_matrix.md +├── test_report.md +└── workflow_execution_report.md +``` + +## Complete Workflow +This IS the complete workflow. For step-by-step control: +1. `/generate-test-case JIRA_KEY` +2. `/generate-e2e-case JIRA_KEY` +3. `/run-tests JIRA_KEY` +4. 
`/submit-pr JIRA_KEY` + +## See Also +- `/generate-test-case` - Manual test case generation +- `/generate-e2e-case` - Manual E2E generation +- `/run-tests` - Manual test execution + +--- + +Execute complete test generation workflow for: **{args}** diff --git a/.claude/commands/generate-e2e-case.md b/.claude/commands/generate-e2e-case.md new file mode 100644 index 00000000000..7da48ffab36 --- /dev/null +++ b/.claude/commands/generate-e2e-case.md @@ -0,0 +1,171 @@ +--- +name: e2e_test_generation_openshift_private +description: Orchestrate complete E2E test generation workflow with validation, setup, code generation, and quality checks for openshift-tests-private +tools: Read, Write, Edit, Bash, Glob, Grep, mcp_deepwiki_ask_question +argument-hint: [JIRA_KEY] +--- + +## Name +generate-e2e-case + +## Synopsis +``` +/generate-e2e-case JIRA_KEY +``` + +## Description +The `generate-e2e-case` command generates E2E test code based on test cases and integrates it into the openshift-tests-private repository. + +This is the second step in the test generation workflow. + +## Implementation + +You are an OpenShift QE E2E test generation orchestrator focused on coordinating the complete test generation workflow. + +### Code Generation Guidelines +**MANDATORY:** All E2E test code generation MUST strictly follow the guidelines in: +- `.claude/config/rules/e2e_rules/e2e_test_case_guidelines_test_private.md` + +This guideline file contains: +- ✅ Critical rules (NEVER create new test files, ALWAYS use existing platform files) +- ✅ Platform file mapping (AWS→hive_aws.go, Azure→hive_azure.go, etc.) +- ✅ Test naming conventions and RFC 1123 compliance +- ✅ Pattern learning from existing tests (MANDATORY before writing) +- ✅ Correct vs. incorrect code patterns +- ✅ Managed DNS setup procedures +- ✅ kubectl/oc command syntax rules + +### STEP 1: Background Repository Setup and Branch Creation +- **CHECK:** Verify if `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/openshift-tests-private` exists +- **IF exists AND contains .git:** Repository ready, proceed to branch setup +- **IF exists BUT incomplete:** Verify repository integrity or re-clone +- **IF not exists:** Execute repository setup NOW +- **BRANCH SETUP (MANDATORY):** + - Navigate to repository: `cd .claude/test_artifacts/{COMPONENT}/{jira_issue_key}/openshift-tests-private` + - Ensure on master branch: `git checkout master` + - Pull latest changes: `git pull origin master` + - Create/recreate E2E branch: `git checkout -B {jira_issue_key}-e2e` + - Branch naming format: `{JIRA_KEY}-e2e` (e.g., `HIVE-2923-e2e`) +- **VERIFY:** Repository ready with clean working directory and on correct branch before proceeding + +### STEP 2: Validation Phase +- **VALIDATE:** Check existing E2E test coverage and gaps +- **VERIFY:** Prerequisite test case file exists at `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/test_cases/{jira_issue_key}_e2e_test_case.md` +- **CRITICAL:** Test case file MUST exist before proceeding to code generation +- **OUTPUT:** Validation results written to phases directory + +### STEP 3: Platform Detection (MANDATORY) +- **MANDATORY:** Read test case file: `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/test_cases/{jira_issue_key}_e2e_test_case.md` +- **MANDATORY:** Scan test case names for platform keywords (AWS, Azure, GCP, VSphere, etc.) 
+- **MANDATORY:** Extract ALL unique platforms from test case content +- **MANDATORY:** Log detected platforms: `Detected platforms: [platform1, platform2, ...]` +- **VERIFY:** Platform list is complete before code generation + +### STEP 4: Parallel Code Generation for ALL Platforms (MANDATORY) +- **CRITICAL:** Execute code generation for EACH detected platform +- **CRITICAL:** Use separate tool calls in SAME message for parallel execution +- **CRITICAL:** Each call MUST include platform parameter (platform=AWS, platform=Azure, etc.) +- **CRITICAL:** DO NOT stop after one platform - ALL platforms must be processed +- **MANDATORY:** Follow ALL rules in `e2e_test_case_guidelines_test_private.md`: + - Learn from 3-5 existing tests before writing (use grep to analyze patterns) + - Use existing platform files (hive_aws.go, hive_azure.go, etc.) + - Use createCD() function (don't manually create imageSet and ClusterDeployment) + - Follow RFC 1123 naming conventions + - Include proper cleanup with defer statements + - For DNS tests: extract hiveutil, use enableManagedDNS(), get DNSZone name dynamically +- **PARALLEL EXECUTION FORMAT:** + ``` + - Platform 1: AWS + - Platform 2: Azure + - Platform 3: GCP + (All in same message for true parallel execution) + ``` +- **VERIFY:** ALL platforms processed before proceeding to quality check + +### STEP 5: Quality Validation +- **VERIFY:** Generated code compiles successfully +- **VERIFY:** Code meets quality standards and follows guidelines in `e2e_test_case_guidelines_test_private.md`: + - ✅ No new test files created (code added to existing platform files) + - ✅ Test naming follows RFC 1123 and convention + - ✅ Uses createCD() function (not manual imageSet/CD creation) + - ✅ Includes cleanup with defer statements + - ✅ No hardcoded values (especially DNS zone names) + - ✅ Correct kubectl/oc command syntax (no `-A` with specific resource names) +- **IF compilation errors:** Iterate fixes until successful +- **OUTPUT:** Quality check results to phases directory + +### STEP 6: Workflow Completion Report +- **SUMMARIZE:** All completed phases and their outputs +- **LIST:** Generated test files by platform +- **CONFIRM:** Test code ready for execution +- **REPORT:** Any issues or warnings encountered + +## Performance Requirements +- Repository setup: < 10 seconds (if needed) +- Validation: < 10 seconds +- Platform detection: < 5 seconds +- Code generation per platform: 2 minutes (parallel execution) +- Quality check: 2 minutes +- Total target time: 5-7 minutes for complete workflow + +## Critical Requirements +- **NEVER skip platform detection** - Must scan test cases +- **NEVER process only one platform** - All platforms must be generated +- **NEVER execute platforms sequentially** - Use parallel execution +- **VERIFY each step completion** before proceeding to next step +- **STOP immediately** if test case file prerequisite missing +- **REGENERATE code** if quality checks fail until successful + +## Error Handling +- **IF repository setup fails:** Report error and stop +- **IF test case file missing:** Report prerequisite missing and stop +- **IF no platforms detected:** Report error and request manual verification +- **IF code generation fails:** Report which platform failed and error details +- **IF quality check fails:** Iterate fixes automatically until success + +## Examples + +1. **Basic usage**: + ``` + /generate-e2e-case HIVE-2883 + ``` + +2. 
**For different component**: + ``` + /generate-e2e-case CCO-1234 + ``` + +## Arguments +- **$1** (required): JIRA issue key (e.g., HIVE-2883, CCO-1234) + +## Prerequisites +- Test case must exist (run `/generate-test-case` first) +- GitHub CLI (`gh`) installed and authenticated +- Fork of openshift-tests-private configured +- **E2E guidelines file must exist:** `.claude/config/rules/e2e_rules/e2e_test_case_guidelines_test_private.md` + +## Prerequisite Check +- Checks if test case exists in `.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/test_cases/` +- If missing, prompts to run `/generate-test-case` first + +## Output Location +``` +.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/openshift-tests-private/ +└── test/extended/{component}/ + └── {jira_key}_test.go +``` + +Branch: `ai-case-design-{JIRA_KEY}` + +## Regenerate Mode +Use `/regenerate-e2e` to skip prerequisite checks and force regeneration. + +## See Also +- `/generate-test-case` - Previous step +- `/run-tests` - Next step: Execute tests +- `/regenerate-e2e` - Force regeneration +- `.claude/config/rules/e2e_rules/e2e_test_case_guidelines_test_private.md` - **E2E code generation guidelines (MANDATORY)** + +--- + +Execute E2E test generation for: **{args}** diff --git a/.claude/commands/generate-report.md b/.claude/commands/generate-report.md new file mode 100644 index 00000000000..d646c404fc0 --- /dev/null +++ b/.claude/commands/generate-report.md @@ -0,0 +1,159 @@ +--- +name: test_report_generation +description: Generate comprehensive test reports from existing test artifacts, execution results, and coverage matrix +tools: Read, Write +argument-hint: [JIRA_KEY] +--- + +## Name +generate-report + +## Synopsis +``` +/generate-report JIRA_KEY +``` + +## Description +The `generate-report` command generates a comprehensive test report that consolidates all test artifacts including test cases, coverage matrix, and execution results. + +This is an optional step that can be executed after test execution. + +## When invoked with a JIRA issue key (e.g., HIVE-2923): + +## Implementation + +You are an OpenShift QE test reporting specialist focused on creating comprehensive test reports. 
+ +### STEP 1: Load Report Template +- **READ:** `.claude/config/templates/test_report_template.md` +- **PARSE:** Template structure and required sections +- **IDENTIFY:** Placeholder fields to populate +- **OUTPUT:** Template structure internalized (no file output) + +### STEP 2: Gather All Test Artifacts (Parallel Execution) +- **MANDATORY PARALLEL READS:** Execute in single message with multiple Read calls: + - **Read 1:** `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/phases/test_requirements_output.md` (or .yaml) + - **Read 2:** `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/test_coverage_matrix.md` + - **Read 3:** `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/test_cases/{jira_issue_key}_e2e_test_case.md` (if exists) + - **Read 4:** `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/test_cases/{jira_issue_key}_manual_test_case.md` (if exists) + - **Read 5:** `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/test_execution_results/comprehensive_test_results.md` (if exists) +- **HANDLE:** Missing files gracefully (mark as "Not Available" in report) +- **OUTPUT:** Collected artifact data ready for report population + +### STEP 3: Generate Comprehensive Report +- **POPULATE:** Template with all collected data +- **INCLUDE SECTIONS:** + - **Section 1: Test Overview** + - JIRA issue summary + - Test generation date + - Artifacts overview (list all generated files) + - **Section 2: Test Requirements** + - From test_requirements_output.md + - Root cause analysis + - Test scope and objectives + - **Section 3: Test Coverage Matrix** + - From test_coverage_matrix.md + - Coverage statistics + - Test scenarios breakdown + - **Section 4: E2E Test Cases** + - From e2e_test_case.md (if exists) + - Automated test scenarios + - Platform coverage + - **Section 5: Manual Test Cases** + - From manual_test_case.md (if exists) + - Manual test scenarios + - Platform-specific tests + - **Section 6: Execution Results** + - From comprehensive_test_results.md (if exists) + - Test execution summary + - Pass/fail statistics + - **Section 7: Product Bugs Identified** + - Extract from execution results + - Bug severity and impact + - Recommended actions + - **Section 8: E2E Test Issues Identified** + - Extract from execution results + - Test code/config issues + - Fixes applied + - **Section 9: Risk Assessment** + - Based on coverage and execution results + - Identified gaps or risks + - Mitigation recommendations + - **Section 10: Defect Statistics** + - Total bugs found (product vs. 
E2E) + - Pass rate and coverage metrics + - Quality score +- **OUTPUT FILE:** `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/test_report.md` + +### STEP 4: Report Validation +- **VERIFY:** All required sections populated +- **VERIFY:** Report follows template structure +- **VERIFY:** File size within limits (< 20KB) +- **VERIFY:** All artifact references correct +- **OUTPUT:** Validation status and report location + +## Performance Requirements +- Template loading: < 2 seconds +- Artifact gathering (parallel): < 10 seconds +- Report generation: < 30 seconds +- Validation: < 3 seconds +- Total target time: < 45 seconds + +## Critical Requirements +- **FOLLOW template structure exactly** - All sections in order +- **INCLUDE all available artifacts** - Don't skip any found files +- **SEPARATE product bugs from E2E bugs** - Clear classification +- **PROVIDE actionable statistics** - Must be useful for decision-making +- **HANDLE missing files gracefully** - Mark as "Not Available" rather than failing +- **MAXIMUM report size: 20KB** - Keep concise and focused + +## Error Handling +- **IF template not found:** Use default structure from memory +- **IF no artifacts found:** Create minimal report with available data +- **IF file read fails:** Mark section as "Not Available" and continue +- **IF report exceeds size limit:** Summarize sections to reduce size + +## Examples + +1. **Basic usage**: + ``` + /generate-report HIVE-2883 + ``` + +2. **For different component**: + ``` + /generate-report CCO-1234 + ``` + +## Arguments +- **$1** (required): JIRA issue key (e.g., HIVE-2883, CCO-1234) + +## Prerequisites +This command requires existing test artifacts from previous workflow steps: +- test_requirements_output.md (from test_case_generation) +- test_coverage_matrix.md (from test_case_generation) +- E2E/Manual test cases (from test_case_generation) +- comprehensive_test_results.md (from test-executor, optional) + +## Output Structure +``` +.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/ +└── test_report.md +``` + +## Report Contents +The generated report includes: +- JIRA issue summary +- Test requirements +- Test strategy +- Test cases (E2E and Manual) +- Test coverage matrix +- Test execution results (if available) + +## See Also +- `/run-tests` - Previous step +- Report template: `.claude/config/templates/test_report_template.md` + +--- + +Execute test report generation for: **{args}** diff --git a/.claude/commands/generate-test-case.md b/.claude/commands/generate-test-case.md new file mode 100644 index 00000000000..65b36f14982 --- /dev/null +++ b/.claude/commands/generate-test-case.md @@ -0,0 +1,168 @@ +--- +name: test_case_generation +description: Generate comprehensive test cases for JIRA issues in OpenShift QE, including E2E and manual test cases with coverage matrix. +tools: Read, Edit, Bash, Grep, Glob, gh, jira-mcp-snowflake MCP, DeepWiki-MCP +argument-hint: [JIRA_KEY] +--- + +## Name +generate-test-case + +## Synopsis +``` +/generate-test-case JIRA_KEY +``` + +## Description +The `generate-test-case` command generates comprehensive test cases from a JIRA issue, including test requirements analysis, test strategy, and detailed test case design. + +This is the first step in the test generation workflow. + +## When invoked with a JIRA issue key: +1. Gather JIRA data, extract component, then search related PRs (sequential execution) +2. Analyze requirements and generate test requirements document +3. Execute thinking framework and create test strategy with coverage matrix +4. 
Generate test case files + +## Implementation + +You are an OpenShift QE specialist focusing on test requirements analysis and test case generation. + +### STEP 1: Sequential Resource Gathering (JIRA then PR Search) +- **MANDATORY:** TOOL CALL 1 (JIRA): Use jira-mcp-snowflake MCP to get JIRA issue data for {jira_issue_key}, If MCP fails, use WebFetch for JIRA data +- **MANDATORY:** Get component name from JIRA data (from components field) +- **MANDATORY:** TOOL CALL 2 (PR): Use gh CLI to search PRs: `gh pr list --search "{jira_issue_key}" --repo openshift/{component} --state all --json url,title,number` +- **NOTE:** Step 2 depends on component from Step 1, so these must execute sequentially +- **FALLBACK:** +- **ANALYZE (if PRs found):** Use gh CLI for detailed PR analysis: `gh pr view {PR_URL}` and `gh pr diff {PR_URL}` to examine commits and file modifications +- **OUTPUT:** JIRA data + PR links + detailed PR analysis (if available); proceed to analysis even if PR search fails + +### STEP 2: Generate Analysis Output +- **WAIT:** Ensure Step 1 (sequential resource gathering) completes before proceeding +- **COMBINE:** JIRA issue data + all PR change details (if any) +- **ANALYZE:** root cause, test_requirements, technical scope, affected platforms, test scenarios +- **OUTPUT:** Generate `test_requirements_output.md` to `.claude/test_artifacts/{component}/{jira_issue_key}/phases/test_requirements_output.md` +- **CRITICAL:** File MUST contain ONLY the fields defined in Output Files section (component_name, card_summary, test_requirements, affected_platforms, test_scenarios, edge_cases) +- **FORBIDDEN:** Adding any sections, headers, or content NOT explicitly listed in the Content field definition + +### STEP 3: Load Rules & Execute Thinking Framework +- **MANDATORY:** Load all test generation rules from `.claude/config/rules/test_case_rules/` +- **MANDATORY:** Execute thinking framework internally following 4 phases (DO NOT output thinking process to user - execute internally only): + - Phase 1: Ecosystem Contextualization + - Phase 2: User Reality Immersion + - Phase 3: Systemic Integration Analysis (includes architecture understanding) + - Phase 4: End-to-End Value Delivery Validation +- **OUTPUT:** Generate `test_strategy.md` to `.claude/test_artifacts/{component}/{jira_issue_key}/phases/test_strategy.md` +- **CRITICAL:** test_strategy.md MUST contain ONLY: test_coverage_matrix, test_scenarios, validation_methods +- **FORBIDDEN:** Adding any sections NOT in Content field (no Executive Summary, Thinking Framework Application, etc.) 
+- **OUTPUT:** Generate `test_coverage_matrix.md` to `.claude/test_artifacts/{component}/{jira_issue_key}/test_coverage_matrix.md` +- **MANDATORY:** MUST follow EXACT template structure from `.claude/config/templates/test_coverage_matrix_template.md` +- **FORBIDDEN:** Adding any sections, headers, or columns NOT in the template +- **MANDATORY:** Use ONLY the table format defined in template (Scenario headers + table rows) +- **VERIFICATION:** Before writing `test_coverage_matrix.md`, verify each row's Test Type column against classification rules + +### STEP 4: Generate Final Test Cases +- **MANDATORY:** Combine `test_requirements_output.md` and `test_strategy.md` +- **MANDATORY:** Load template file `.claude/config/templates/test_cases_template.md` +- **MANDATORY:** Check if component-specific classification rules exist AND have content: + - Replace `{component}` with actual component name (e.g., "hive" → `test_case_generation_rules_hive.md`) + - Check file exists: `test -f .claude/config/rules/test_case_rules/test_case_generation_rules_{component}.md` + - Check file has content: `test -s .claude/config/rules/test_case_rules/test_case_generation_rules_{component}.md` (file size > 0) + - **DECISION BASED ON RESULT:** + - **If file EXISTS AND has content (file size > 0):** Separate test cases by type - create TWO files: + - **FILE 1:** `{jira_issue_key}_e2e_test_case.md` - Contains ONLY E2E test cases + - **FILE 2:** `{jira_issue_key}_manual_test_case.md` - Contains ONLY Manual test cases + - **If file NOT FOUND OR empty (file size = 0):** Create ONE file: + - **FILE:** `{jira_issue_key}_test_cases.md` - Contains ALL test cases (no E2E/Manual separation) +- **OUTPUT:** Generate files to `.claude/test_artifacts/{jira_issue_key}/test_cases/` +- **NOTE:** If all test cases are one type (when rules exist), only generate the corresponding file + +## Performance Requirements +- Execute independent operations in parallel where possible (rules loading, template reading) +- JIRA fetch and PR search must be sequential (PR search depends on component from JIRA) +- Target completion: 60-90 seconds +- Use concise outputs: `✅ Step X completed: ` + +## Rules and Guidelines +- All test generation rules: `.claude/config/rules/test_case_rules/unified_test_generation_rules.md` +- Component-specific rules: `.claude/config/rules/test_case_rules/test_case_generation_rules_{component}.md` + - Contains classification rules (E2E vs Manual), platform requirements, and component-specific patterns +- All rules are loaded and applied during execution - refer to rule files for complete details + +## Output Files + +**CRITICAL PATH RULE**: All output files MUST be created in `.claude/test_artifacts/` directory + +**Output Location**: `.claude/test_artifacts/{component}/{jira_issue_key}/` + +**Path Format**: Always use relative paths from project root: `.claude/test_artifacts/{component}/{jira_issue_key}/...` + +### Generated Files + +1. **test_requirements_output.md** + - Path: `.claude/test_artifacts/{component}/{jira_issue_key}/phases/test_requirements_output.md` + - Format: Markdown + - Description: Test requirements analysis + - **MANDATORY Content ONLY:** component_name, card_summary, test_requirements, affected_platforms, test_scenarios, edge_cases + - **FORBIDDEN:** Any additional sections (Root Cause Analysis, Technical Scope, Integration Points, etc.) + +2. 
**test_strategy.md** + - Path: `.claude/test_artifacts/{component}/{jira_issue_key}/phases/test_strategy.md` + - Format: Markdown + - Description: Test strategy and coverage + - **MANDATORY Content ONLY:** test_coverage_matrix, test_scenarios, validation_methods + - **FORBIDDEN:** Any additional sections (Executive Summary, Philosophy, Thinking Framework, Risk Assessment, etc.) + +3. **test_coverage_matrix.md** + - Path: `.claude/test_artifacts/{component}/{jira_issue_key}/test_coverage_matrix.md` + - Format: Markdown + - Description: Test coverage matrix + - Template: `.claude/config/templates/test_coverage_matrix_template.md` + - **CRITICAL:** MUST use EXACT template structure - ONLY scenario headers and table rows + - **FORBIDDEN:** Adding summary sections, statistics, or any content NOT in template + +4. **{jira_issue_key}_e2e_test_case.md** + - Path: `.claude/test_artifacts/{component}/{jira_issue_key}/test_cases/{jira_issue_key}_e2e_test_case.md` + - Format: Markdown + - Description: E2E test cases only (automated executable test cases) + - Template: `.claude/config/templates/test_cases_template.yaml` + - **CRITICAL:** MUST follow EXACT template structure and variable placeholders + - **FORBIDDEN:** Adding any sections NOT defined in template + +5. **{jira_issue_key}_manual_test_case.md** + - Path: `.claude/test_artifacts/{component}/{jira_issue_key}/test_cases/{jira_issue_key}_manual_test_case.md` + - Format: Markdown + - Description: Manual test cases only (require manual setup or validation) + - Template: `.claude/config/templates/test_cases_template.yaml` + - **CRITICAL:** MUST follow EXACT template structure and variable placeholders + - **FORBIDDEN:** Adding any sections NOT defined in template + +## Examples + +1. **Basic usage**: + ``` + /generate-test-case HIVE-2883 + ``` + +2. **For different component**: + ``` + /generate-test-case CCO-1234 + ``` + +## Arguments +- **$1** (required): JIRA issue key (e.g., HIVE-2883, CCO-1234) + +## Prerequisites +- JIRA MCP configured and accessible +- Component rules exist in `.claude/config/rules/test_case_rules/` + +## Regenerate Mode +Use `/regenerate-test` to skip prerequisite checks and force regeneration. + +## See Also +- `/generate-e2e-case` - Next step: E2E code generation +- `/regenerate-test` - Force regeneration + +--- + +Execute test case generation for: **{args}** diff --git a/.claude/commands/regenerate-e2e.md b/.claude/commands/regenerate-e2e.md new file mode 100644 index 00000000000..93dd543aff8 --- /dev/null +++ b/.claude/commands/regenerate-e2e.md @@ -0,0 +1,182 @@ +--- +name: e2e_test_generation_openshift_private +description: Orchestrate complete E2E test generation workflow with validation, setup, code generation, and quality checks for openshift-tests-private +tools: Read, Write, Edit, Bash, Glob, Grep, mcp_deepwiki_ask_question +argument-hint: [JIRA_KEY] +regenerate-mode: true +--- + +## Name +regenerate-e2e + +## Synopsis +``` +/regenerate-e2e JIRA_KEY +``` + +## Description +The `regenerate-e2e` command regenerates E2E test code, **skipping all prerequisite checks** and **forcing regeneration** even if E2E code already exists. + +This is the regeneration variant of `/generate-e2e-case`. + +## When invoked with a JIRA issue key (e.g., HIVE-2883): + +## Implementation + +You are an OpenShift QE E2E test generation orchestrator focused on coordinating the complete test generation workflow. + +**REGENERATE MODE ENABLED** - All prerequisite checks are skipped and existing files will be overwritten. 
+ +### Code Generation Guidelines +**MANDATORY:** All E2E test code generation MUST strictly follow the guidelines in: +- `.claude/config/rules/e2e_rules/e2e_test_case_guidelines_test_private.md` + +This guideline file contains: +- ✅ Critical rules (NEVER create new test files, ALWAYS use existing platform files) +- ✅ Platform file mapping (AWS→hive_aws.go, Azure→hive_azure.go, etc.) +- ✅ Test naming conventions and RFC 1123 compliance +- ✅ Pattern learning from existing tests (MANDATORY before writing) +- ✅ Correct vs. incorrect code patterns +- ✅ Managed DNS setup procedures +- ✅ kubectl/oc command syntax rules + +### STEP 1: Background Repository Setup and Branch Creation +- **CHECK:** Verify if `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/openshift-tests-private` exists +- **IF exists AND contains .git:** Repository ready, proceed to branch setup +- **IF exists BUT incomplete:** Verify repository integrity or re-clone +- **IF not exists:** Execute repository setup NOW +- **BRANCH SETUP (MANDATORY):** + - Navigate to repository: `cd .claude/test_artifacts/{COMPONENT}/{jira_issue_key}/openshift-tests-private` + - Ensure on master branch: `git checkout master` + - Pull latest changes: `git pull origin master` + - Create/recreate E2E branch: `git checkout -B {jira_issue_key}-e2e` + - Branch naming format: `{JIRA_KEY}-e2e` (e.g., `HIVE-2923-e2e`) +- **VERIFY:** Repository ready with clean working directory and on correct branch before proceeding + +### STEP 2: Validation Phase (SKIPPED IN REGENERATE MODE) +- **SKIP:** All prerequisite checks +- **FORCE:** Proceed to platform detection regardless of existing files + +### STEP 3: Platform Detection (MANDATORY) +- **MANDATORY:** Read test case file: `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/test_cases/{jira_issue_key}_e2e_test_case.md` +- **MANDATORY:** Scan test case names for platform keywords (AWS, Azure, GCP, VSphere, etc.) +- **MANDATORY:** Extract ALL unique platforms from test case content +- **MANDATORY:** Log detected platforms: `Detected platforms: [platform1, platform2, ...]` +- **VERIFY:** Platform list is complete before code generation + +### STEP 4: Parallel Code Generation for ALL Platforms (MANDATORY) + +**BEFORE CODE GENERATION - MANDATORY PREPARATION:** +1. **READ E2E GUIDELINES:** `.claude/config/rules/e2e_rules/e2e_test_case_guidelines_test_private.md` +2. **IDENTIFY TARGET FILE:** Use existing platform file, NEVER create new test file + - AWS → `test/extended/cluster_operator/hive/hive_aws.go` + - Azure → `test/extended/cluster_operator/hive/hive_azure.go` + - GCP → `test/extended/cluster_operator/hive/hive_gcp.go` + - vSphere → `test/extended/cluster_operator/hive/hive_vsphere.go` +3. **LEARN FROM EXISTING TESTS:** Read 3-5 existing tests in target file to understand patterns +4. **EXTRACT PATTERNS:** Variable naming, test structure, validation methods, cleanup patterns + +**CODE GENERATION EXECUTION:** +- **CRITICAL:** Execute code generation for EACH detected platform +- **CRITICAL:** Use separate tool calls in SAME message for parallel execution +- **CRITICAL:** Each call MUST include platform parameter (platform=AWS, platform=Azure, etc.) 
+- **CRITICAL:** Each call MUST reference the guidelines file path +- **CRITICAL:** Each call MUST specify existing platform file to APPEND to (NOT create new file) +- **CRITICAL:** DO NOT stop after one platform - ALL platforms must be processed +- **PARALLEL EXECUTION FORMAT:** + ``` + - Platform 1: AWS → Append to hive_aws.go + - Platform 2: Azure → Append to hive_azure.go + - Platform 3: GCP → Append to hive_gcp.go + (All in same message for true parallel execution) + ``` +- **VERIFY:** ALL platforms processed before proceeding to quality check + +### STEP 5: Quality Validation +- **VERIFY:** Generated code compiles successfully +- **VERIFY:** Code meets quality standards and follows guidelines in `e2e_test_case_guidelines_test_private.md`: + - ✅ No new test files created (code added to existing platform files) + - ✅ Test naming follows RFC 1123 and NonHyperShiftHOST convention + - ✅ Uses createCD() function (not manual imageSet/CD creation) + - ✅ Includes cleanup with defer statements + - ✅ No hardcoded values (especially DNS zone names) + - ✅ Correct kubectl/oc command syntax (no `-A` with specific resource names) + - ✅ Uses wait.PollUntilContextTimeout (not deprecated wait.Poll) +- **IF compilation errors:** Iterate fixes until successful +- **OUTPUT:** Quality check results to phases directory + +### STEP 6: Workflow Completion Report +- **SUMMARIZE:** All completed phases and their outputs +- **LIST:** Generated test files by platform +- **CONFIRM:** Test code ready for execution +- **REPORT:** Any issues or warnings encountered + +## Performance Requirements +- Repository setup: < 10 seconds (if needed) +- Validation: < 10 seconds +- Platform detection: < 5 seconds +- Code generation per platform: 2 minutes (parallel execution) +- Quality check: 2 minutes +- Total target time: 5-7 minutes for complete workflow + +## Critical Requirements +- **NEVER skip platform detection** - Must scan test cases +- **NEVER process only one platform** - All platforms must be generated +- **NEVER execute platforms sequentially** - Use parallel execution +- **VERIFY each step completion** before proceeding to next step +- **FORCE regeneration** - Overwrite existing files without confirmation +- **REGENERATE code** if quality checks fail until successful + +## Error Handling +- **IF repository setup fails:** Report error and stop +- **IF test case file missing:** Report prerequisite missing and stop +- **IF no platforms detected:** Report error and request manual verification +- **IF code generation fails:** Report which platform failed and error details +- **IF quality check fails:** Iterate fixes automatically until success + +## Examples + +1. **Basic usage**: + ``` + /regenerate-e2e HIVE-2883 + ``` + +2. 
**For different component**: + ``` + /regenerate-e2e CCO-1234 + ``` + +## Arguments +- **$1** (required): JIRA issue key (e.g., HIVE-2883, CCO-1234) + +## When to Use +- Fix issues in existing E2E code +- Update E2E test based on new requirements +- Regenerate after test case updates +- Force complete regeneration + +## Behavior Difference +| Aspect | `/generate-e2e-case` | `/regenerate-e2e` | +|--------|---------------------|------------------| +| Prerequisite check | ✅ Yes | ❌ No (skip) | +| Overwrite existing | ❌ No (skip if exists) | ✅ Yes (force) | +| Use case | First time generation | Update/fix existing | + +## Output Location +``` +.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/openshift-tests-private/ +└── test/extended/cluster_operator/hive/ + └── hive_{platform}.go (test appended to existing file) +``` + +**IMPORTANT:** Tests are APPENDED to existing platform files, NOT created as new files. + +Branch: `ai-case-design-{JIRA_KEY}` (recreated if needed) + +## See Also +- `/generate-e2e-case` - Normal E2E code generation +- `/regenerate-test` - Regenerate test cases + +--- + +**REGENERATE MODE**: Skip prerequisite checks and force regeneration for: **{args}** diff --git a/.claude/commands/regenerate-test.md b/.claude/commands/regenerate-test.md new file mode 100644 index 00000000000..beb60d73450 --- /dev/null +++ b/.claude/commands/regenerate-test.md @@ -0,0 +1,177 @@ +--- +name: test_case_generation +description: Generate comprehensive test cases for JIRA issues in OpenShift QE, including E2E and manual test cases with coverage matrix. +tools: Read, Edit, Bash, Grep, Glob, gh, jira-mcp-snowflake MCP, DeepWiki-MCP +argument-hint: [JIRA_KEY] +regenerate-mode: true +--- + +## Name +regenerate-test + +## Synopsis +``` +/regenerate-test JIRA_KEY +``` + +## Description +The `regenerate-test` command regenerates test cases from a JIRA issue, **skipping all prerequisite checks** and **forcing regeneration** even if test cases already exist. + +This is the regeneration variant of `/generate-test-case`. + +## When invoked with a JIRA issue key: +1. Gather JIRA data, extract component, then search related PRs (sequential execution) +2. Analyze requirements and generate test requirements document +3. Execute thinking framework and create test strategy with coverage matrix +4. Generate test case files + +## Implementation + +You are an OpenShift QE specialist focusing on test requirements analysis and test case generation. + +**REGENERATE MODE ENABLED** - All prerequisite checks are skipped and existing files will be overwritten. 
+ +### STEP 1: Sequential Resource Gathering (JIRA then PR Search) +- **MANDATORY:** TOOL CALL 1 (JIRA): Use jira-mcp-snowflake MCP to get JIRA issue data for {jira_issue_key}, If MCP fails, use WebFetch for JIRA data +- **MANDATORY:** Get component name from JIRA data (from components field) +- **MANDATORY:** TOOL CALL 2 (PR): Use gh CLI to search PRs: `gh pr list --search "{jira_issue_key}" --repo openshift/{component} --state all --json url,title,number` +- **NOTE:** Step 2 depends on component from Step 1, so these must execute sequentially +- **FALLBACK:** +- **ANALYZE (if PRs found):** Use gh CLI for detailed PR analysis: `gh pr view {PR_URL}` and `gh pr diff {PR_URL}` to examine commits and file modifications +- **OUTPUT:** JIRA data + PR links + detailed PR analysis (if available); proceed to analysis even if PR search fails + +### STEP 2: Generate Analysis Output +- **WAIT:** Ensure Step 1 (sequential resource gathering) completes before proceeding +- **COMBINE:** JIRA issue data + all PR change details (if any) +- **ANALYZE:** root cause, test_requirements, technical scope, affected platforms, test scenarios +- **OUTPUT:** Generate `test_requirements_output.md` to `.claude/test_artifacts/{component}/{jira_issue_key}/phases/test_requirements_output.md` +- **CRITICAL:** File MUST contain ONLY the fields defined in Output Files section (component_name, card_summary, test_requirements, affected_platforms, test_scenarios, edge_cases) +- **FORBIDDEN:** Adding any sections, headers, or content NOT explicitly listed in the Content field definition + +### STEP 3: Load Rules & Execute Thinking Framework +- **MANDATORY:** Load all test generation rules from `.claude/config/rules/test_case_rules/` +- **MANDATORY:** Execute thinking framework internally following 4 phases (DO NOT output thinking process to user - execute internally only): + - Phase 1: Ecosystem Contextualization + - Phase 2: User Reality Immersion + - Phase 3: Systemic Integration Analysis (includes architecture understanding) + - Phase 4: End-to-End Value Delivery Validation +- **OUTPUT:** Generate `test_strategy.md` to `.claude/test_artifacts/{component}/{jira_issue_key}/phases/test_strategy.md` +- **CRITICAL:** test_strategy.md MUST contain ONLY: test_coverage_matrix, test_scenarios, validation_methods +- **FORBIDDEN:** Adding any sections NOT in Content field (no Executive Summary, Thinking Framework Application, etc.) 
+- **OUTPUT:** Generate `test_coverage_matrix.md` to `.claude/test_artifacts/{component}/{jira_issue_key}/test_coverage_matrix.md` +- **MANDATORY:** MUST follow EXACT template structure from `.claude/config/templates/test_coverage_matrix_template.md` +- **FORBIDDEN:** Adding any sections, headers, or columns NOT in the template +- **MANDATORY:** Use ONLY the table format defined in template (Scenario headers + table rows) +- **VERIFICATION:** Before writing `test_coverage_matrix.md`, verify each row's Test Type column against classification rules + +### STEP 4: Generate Final Test Cases +- **MANDATORY:** Combine `test_requirements_output.md` and `test_strategy.md` +- **MANDATORY:** Load template file `.claude/config/templates/test_cases_template.md` +- **MANDATORY:** Check if component-specific classification rules exist AND have content: + - Replace `{component}` with actual component name (e.g., "hive" → `test_case_generation_rules_hive.md`) + - Check file exists: `test -f .claude/config/rules/test_case_rules/test_case_generation_rules_{component}.md` + - Check file has content: `test -s .claude/config/rules/test_case_rules/test_case_generation_rules_{component}.md` (file size > 0) + - **DECISION BASED ON RESULT:** + - **If file EXISTS AND has content (file size > 0):** Separate test cases by type - create TWO files: + - **FILE 1:** `{jira_issue_key}_e2e_test_case.md` - Contains ONLY E2E test cases + - **FILE 2:** `{jira_issue_key}_manual_test_case.md` - Contains ONLY Manual test cases + - **If file NOT FOUND OR empty (file size = 0):** Create ONE file: + - **FILE:** `{jira_issue_key}_test_cases.md` - Contains ALL test cases (no E2E/Manual separation) +- **OUTPUT:** Generate files to `.claude/test_artifacts/{jira_issue_key}/test_cases/` +- **NOTE:** If all test cases are one type (when rules exist), only generate the corresponding file + +## Performance Requirements +- Execute independent operations in parallel where possible (rules loading, template reading) +- JIRA fetch and PR search must be sequential (PR search depends on component from JIRA) +- Target completion: 60-90 seconds +- Use concise outputs: `✅ Step X completed: ` + +## Rules and Guidelines +- All test generation rules: `.claude/config/rules/test_case_rules/unified_test_generation_rules.md` +- Component-specific rules: `.claude/config/rules/test_case_rules/test_case_generation_rules_{component}.md` + - Contains classification rules (E2E vs Manual), platform requirements, and component-specific patterns +- All rules are loaded and applied during execution - refer to rule files for complete details + +## Output Files + +**CRITICAL PATH RULE**: All output files MUST be created in `.claude/test_artifacts/` directory + +**Output Location**: `.claude/test_artifacts/{component}/{jira_issue_key}/` + +**Path Format**: Always use relative paths from project root: `.claude/test_artifacts/{component}/{jira_issue_key}/...` + +### Generated Files + +1. **test_requirements_output.md** + - Path: `.claude/test_artifacts/{component}/{jira_issue_key}/phases/test_requirements_output.md` + - Format: Markdown + - Description: Test requirements analysis + - **MANDATORY Content ONLY:** component_name, card_summary, test_requirements, affected_platforms, test_scenarios, edge_cases + - **FORBIDDEN:** Any additional sections (Root Cause Analysis, Technical Scope, Integration Points, etc.) + +2. 
**test_strategy.md** + - Path: `.claude/test_artifacts/{component}/{jira_issue_key}/phases/test_strategy.md` + - Format: Markdown + - Description: Test strategy and coverage + - **MANDATORY Content ONLY:** test_coverage_matrix, test_scenarios, validation_methods + - **FORBIDDEN:** Any additional sections (Executive Summary, Philosophy, Thinking Framework, Risk Assessment, etc.) + +3. **test_coverage_matrix.md** + - Path: `.claude/test_artifacts/{component}/{jira_issue_key}/test_coverage_matrix.md` + - Format: Markdown + - Description: Test coverage matrix + - Template: `.claude/config/templates/test_coverage_matrix_template.md` + - **CRITICAL:** MUST use EXACT template structure - ONLY scenario headers and table rows + - **FORBIDDEN:** Adding summary sections, statistics, or any content NOT in template + +4. **{jira_issue_key}_e2e_test_case.md** + - Path: `.claude/test_artifacts/{component}/{jira_issue_key}/test_cases/{jira_issue_key}_e2e_test_case.md` + - Format: Markdown + - Description: E2E test cases only (automated executable test cases) + - Template: `.claude/config/templates/test_cases_template.yaml` + - **CRITICAL:** MUST follow EXACT template structure and variable placeholders + - **FORBIDDEN:** Adding any sections NOT defined in template + +5. **{jira_issue_key}_manual_test_case.md** + - Path: `.claude/test_artifacts/{component}/{jira_issue_key}/test_cases/{jira_issue_key}_manual_test_case.md` + - Format: Markdown + - Description: Manual test cases only (require manual setup or validation) + - Template: `.claude/config/templates/test_cases_template.yaml` + - **CRITICAL:** MUST follow EXACT template structure and variable placeholders + - **FORBIDDEN:** Adding any sections NOT defined in template + +## Examples + +1. **Basic usage**: + ``` + /regenerate-test HIVE-2883 + ``` + +2. **For different component**: + ``` + /regenerate-test CCO-1234 + ``` + +## Arguments +- **$1** (required): JIRA issue key (e.g., HIVE-2883, CCO-1234) + +## When to Use +- Fix issues in existing test case +- Update test case based on new requirements +- Regenerate after JIRA issue updates +- Force complete regeneration + +## Behavior Difference +| Aspect | `/generate-test-case` | `/regenerate-test` | +|--------|----------------------|-------------------| +| Prerequisite check | ✅ Yes | ❌ No (skip) | +| Overwrite existing | ❌ No (skip if exists) | ✅ Yes (force) | +| Use case | First time generation | Update/fix existing | + +## See Also +- `/generate-test-case` - Normal test case generation +- `/regenerate-e2e` - Regenerate E2E code + +--- + +**REGENERATE MODE**: Skip prerequisite checks and force regeneration for: **{args}** diff --git a/.claude/commands/run-tests.md b/.claude/commands/run-tests.md new file mode 100644 index 00000000000..f7b503c3819 --- /dev/null +++ b/.claude/commands/run-tests.md @@ -0,0 +1,169 @@ +--- +name: test-executor +description: Execute E2E tests in openshift-tests-private, analyze results, perform auto-fixes if needed, and update coverage matrix +tools: Read, Write, Edit, Bash, mcp_deepwiki_ask_question +argument-hint: [JIRA_KEY] +--- + +## Name +run-tests + +## Synopsis +``` +/run-tests JIRA_KEY +``` + +## Description +The `run-tests` command executes E2E tests for a JIRA issue and generates comprehensive test execution reports. + +This is the third step in the test generation workflow. 
+ +## When invoked with a JIRA issue key (e.g., HIVE-2883): + +## Implementation + +You are an OpenShift QE test execution specialist focused on comprehensive E2E test execution with intelligent failure analysis and auto-fix. + +### STEP 1: Environment Validation (Parallel Checks) +- **PARALLEL EXECUTION:** Execute multiple validation commands in single message: + - **Check 1:** Load and verify KUBECONFIG environment variable + - **Check 2:** Verify cluster availability: `oc cluster-info` + - **Check 3:** Verify oc version: `oc version` + - **Check 4:** Verify cluster permissions: `oc whoami` +- **VERIFY:** Required tools available (kubectl, oc, extended-platform-tests) +- **OUTPUT:** Environment validation status with cluster details + +### STEP 2: Test Environment Preparation +- **NAVIGATE:** Change directory to `.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/openshift-tests-private` +- **BUILD:** Execute `make all` to compile test binaries +- **CREATE:** Output directory: `mkdir -p ../test_execution_results` +- **VERIFY:** Build completed successfully and binaries exist +- **OUTPUT:** Build status and binary location + +### STEP 3: E2E Test Execution +- **EXECUTE:** Test command (sequential execution required): + ```bash + ./bin/extended-platform-tests run all --dry-run | grep "{JIRA_KEY}" | ./bin/extended-platform-tests run --timeout 100m -f - --output-file ../test_execution_results/{JIRA_KEY}_test_execution_log.txt + ``` +- **WAIT:** Allow test execution to complete (timeout: 100 minutes) +- **CAPTURE:** Real exit code using `echo $?` +- **VERIFY:** Log file exists: `../test_execution_results/{JIRA_KEY}_test_execution_log.txt` +- **READ:** Test execution log to extract results +- **OUTPUT:** Test execution status and log file location + +### STEP 4: Test Result Analysis + Auto-Fix (Conditional) +- **MANDATORY:** Read and analyze test execution log completely +- **EXTRACT:** All error messages, stack traces, failure points, assertion failures +- **CLASSIFY:** Errors hierarchically: + - **E2E code issues:** Syntax errors, undefined functions, incorrect selectors + - **E2E config issues:** Wrong parameters, missing resources, timeout issues + - **Product bugs:** Actual product defects requiring bug reports +- **IF E2E code/config issues detected:** + - **USE:** DeepWiki MCP to analyze correct command syntax and patterns + - **APPLY:** Code/config corrections to fix issues + - **RE-EXECUTE:** Tests with fixes applied (Step 3 again) + - **DOCUMENT:** Auto-fix attempts and results + - **MAXIMUM:** 2 auto-fix iterations +- **IF product bugs detected:** + - **DOCUMENT:** Bug details for reporting + - **MARK:** Tests as failed due to product defect +- **OUTPUT:** Analysis results with classifications and fix attempts + +### STEP 5: Update Test Coverage Matrix +- **READ:** `../test_coverage_matrix.md` +- **PARSE:** Execution log to extract results per Test Case ID +- **UPDATE:** Status column for each test case: + - ✅ **Passed** - Test executed successfully + - ❌ **Failed** - Test executed but failed (product bug) + - ⚠️ **Skipped** - Test not executed + - 🔄 **Retried** - Failed initially, passed after auto-fix + - 🐛 **Product Bug** - Failed due to product defect +- **CALCULATE:** Coverage metrics: + - Total test scenarios + - Executed scenarios + - Skipped scenarios + - Pass rate percentage + - Execution coverage percentage +- **WRITE:** Updated matrix with coverage summary section +- **OUTPUT FILE:** `../test_coverage_matrix.md` (updated) + +### STEP 6: Generate Comprehensive Test Report +- 
**CREATE:** Detailed test execution report +- **OUTPUT FILE:** `../test_execution_results/{JIRA_KEY}_comprehensive_test_results.md` +- **INCLUDE:** + - **Section 1:** Test Execution Summary (total, passed, failed, skipped) + - **Section 2:** Detailed Results by Test Case + - **Section 3:** Auto-Fix Attempts and Results (if any) + - **Section 4:** Product Bugs Identified (if any) + - **Section 5:** E2E Test Issues Identified (if any) + - **Section 6:** Coverage Matrix Update Summary + - **Section 7:** Recommendations for Next Steps + +## Performance Requirements +- Environment validation: < 10 seconds +- Test environment preparation: < 90 seconds +- E2E test execution: 10-90 minutes (depends on test complexity) +- Result analysis: < 5 minutes +- Auto-fix iteration: < 15 minutes (if needed) +- Coverage matrix update: < 2 minutes +- Report generation: < 3 minutes +- Total target time: 20-120 minutes (depends on test execution time) + +## Critical Requirements +- **MANDATORY: Execute actual tests** - No simulated execution +- **MANDATORY: Capture real exit codes** - Must verify actual test results +- **MANDATORY: Auto-fix workflow** - Must attempt fixes for E2E issues +- **FORBIDDEN: Skip error analysis** - Must analyze all failures +- **MAXIMUM 2 auto-fix iterations** - Prevent infinite loops +- **CLASSIFY errors correctly** - E2E issues vs. product bugs +- **UPDATE coverage matrix** - Must reflect actual execution results + +## Error Handling +- **IF environment validation fails:** Report cluster connectivity issues and stop +- **IF build fails:** Report compilation errors and stop +- **IF test execution fails to start:** Check binary and permissions +- **IF auto-fix fails after 2 iterations:** Document and proceed to reporting +- **IF coverage matrix not found:** Create new matrix from test results + +## Examples + +1. **Basic usage**: + ``` + /run-tests HIVE-2883 + ``` + +2. **For different component**: + ``` + /run-tests CCO-1234 + ``` + +## Arguments +- **$1** (required): JIRA issue key (e.g., HIVE-2883, CCO-1234) + +## Prerequisites +- E2E test code must exist (run `/generate-e2e-case` first) +- Test environment must be configured +- OpenShift cluster kubeconfig accessible + +## Output Structure +``` +.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/phases/ +└── comprehensive_test_results.md +``` + +## Test Execution Details +- Tests run against OpenShift cluster +- Results include: + - Test pass/fail status + - Execution logs + - Error diagnostics (if failed) + - Performance metrics + +## See Also +- `/generate-e2e-case` - Previous step +- `/generate-report` - Generate comprehensive report +- `/submit-pr` - Next step: Submit PR + +--- + +Execute test execution for: **{args}** diff --git a/.claude/commands/submit-pr.md b/.claude/commands/submit-pr.md new file mode 100644 index 00000000000..2f93f1783a9 --- /dev/null +++ b/.claude/commands/submit-pr.md @@ -0,0 +1,160 @@ +--- +name: pr-submitter +description: Create and submit pull requests for generated E2E tests to openshift-tests-private repository following official template +tools: Read, Write, Bash +argument-hint: [JIRA_KEY] +--- + +## Name +submit-pr + +## Synopsis +``` +/submit-pr JIRA_KEY +``` + +## Description +The `submit-pr` command creates a pull request for the E2E test code to the openshift-tests-private repository. + +This is the final step in the test generation workflow. 
+ +## When invoked with a JIRA issue key (e.g., HIVE-2883): + +## Implementation + +You are an OpenShift QE pull request specialist focused on submitting E2E test code following official repository standards. + +### STEP 1: Load Templates and Rules (Parallel Execution) +- **MANDATORY PARALLEL READS:** Execute in single message with multiple Read calls: + - **Read 1:** `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/openshift-tests-private/.github/pull_request_template.md` + - **Read 2:** `.claude/config/rules/pr_submission_rules.md` +- **PARSE:** Template structure and identify required sections +- **IDENTIFY:** Mandatory fields and formatting requirements +- **INTERNALIZE:** All submission guidelines and rules +- **OUTPUT:** Template structure understanding (no file output) + +### STEP 2: Gather PR Content Data +- **READ:** Test execution log: `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/test_execution_results/{jira_issue_key}_test_execution_log.txt` +- **EXTRACT:** Last 100-200 lines showing: + - Test execution command used + - Test results summary (passed/failed/skipped) + - Any important test output or errors +- **IDENTIFY:** CI profile from test environment (AWS, Azure, GCP, etc.) +- **PREPARE:** PR description from JIRA issue summary and test objectives +- **OUTPUT:** Collected data for PR body construction + +### STEP 3: Format PR Body Following Template +- **CONSTRUCT:** PR body with exact template structure: + - **Section 1: PR Description** + - Brief test case description from JIRA + - What feature/functionality is being tested + - **Section 2: Local Test Logs** + - Wrap in `
` tags for proper formatting
+    - Include test execution command
+    - Include test results output (last 100-200 lines)
+  - **Section 3: Jenkins Job Link**
+    - Write 'N/A - tested locally' (default for AI-generated tests)
+  - **Section 4: CI Profile**
+    - Specify platform type used for testing (AWS/Azure/GCP/etc.)
+- **VERIFY:** All required sections populated
+- **OUTPUT:** Formatted PR body ready for submission
+
+### STEP 4: Create Pull Request
+- **NAVIGATE:** Change directory to `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/openshift-tests-private`
+- **EXECUTE:** GitHub CLI command to create PR:
+  ```bash
+  gh pr create --title "[{COMPONENT}] Add E2E tests for {jira_issue_key}" --body "{PR_BODY}"
+  ```
+- **INCLUDE:** `/hold` status in PR body to prevent auto-merge (see the sketch after this list)
+- **CAPTURE:** PR URL from command output
+- **VERIFY:** PR created successfully
+- **OUTPUT:** PR URL and creation status
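+
+A minimal sketch of this step (illustrative only; the temporary `pr_body.md` file and shell variables are assumptions, not part of the required workflow):
+
+```bash
+# Assemble the formatted PR body, including /hold to prevent auto-merge
+cat > pr_body.md <<'EOF'
+/hold
+{PR_BODY}
+EOF
+
+# Create the PR and capture its URL (gh prints the PR URL on success)
+PR_URL=$(gh pr create \
+  --title "[{COMPONENT}] Add E2E tests for {jira_issue_key}" \
+  --body-file pr_body.md)
+echo "Created PR: ${PR_URL}"
+```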
+
+### STEP 5: Generate PR Submission Report
+- **DOCUMENT:** PR creation details
+- **OUTPUT FILE:** `.claude/test_artifacts/{COMPONENT}/{jira_issue_key}/{jira_issue_key}_pr_submission_report.md`
+- **INCLUDE:**
+  - **Section 1:** PR Submission Summary
+    - PR URL
+    - PR title
+    - Creation timestamp
+    - Submission status (success/failure)
+  - **Section 2:** PR Content Overview
+    - Test scenarios included
+    - Platforms tested
+    - CI profile used
+  - **Section 3:** Test Execution Summary
+    - Test results summary
+    - Pass/fail status
+  - **Section 4:** Next Steps
+    - Actions required (remove /hold, request reviews, etc.)
+    - Links to PR and related JIRA issue
+
+## Performance Requirements
+- Template and rules loading: < 5 seconds
+- Content gathering: < 10 seconds
+- PR body formatting: < 5 seconds
+- PR creation: < 30 seconds
+- Report generation: < 10 seconds
+- Total target time: < 60 seconds
+
+## Critical Requirements
+- **MANDATORY: Follow repository template exactly** - All sections required
+- **MANDATORY: Include test execution logs** - Proper format with 
 tags
+- **MANDATORY: Apply /hold status** - Prevent auto-merge
+- **VERIFY all required sections** - Template compliance check
+- **CAPTURE PR URL** - Must return to user
+- **USE GitHub CLI** - Automated PR creation
+
+## Error Handling
+- **IF template file not found:** Use default template structure from rules
+- **IF test execution log not found:** Create PR with note about missing logs
+- **IF gh pr create fails:** Report authentication or permission issues
+- **IF repository not found:** Report prerequisite missing and stop
+
+## Examples
+
+1. **Basic usage**:
+   ```
+   /submit-pr HIVE-2883
+   ```
+
+2. **For different component**:
+   ```
+   /submit-pr CCO-1234
+   ```
+
+## Arguments
+- **$1** (required): JIRA issue key (e.g., HIVE-2883, CCO-1234)
+
+## Prerequisites
+- E2E test code must exist and be validated
+- GitHub CLI (`gh`) installed and authenticated
+- Git fork must be configured
+- Branch `ai-case-design-{JIRA_KEY}` must exist
+- PR submission rules must be followed
+
+## PR Format
+**Commit Message**:
+```
+Add automated test for {JIRA_KEY}
+
+Generated E2E test cases for {JIRA_TITLE}
+
+JIRA: {JIRA_URL}
+```
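+
+A hedged sketch of how this commit message might be produced (branch name and paths are assumptions drawn from earlier workflow steps):
+
+```bash
+git checkout {JIRA_KEY}-e2e   # branch created during E2E code generation
+git add test/extended/cluster_operator/hive/
+git commit -m "Add automated test for {JIRA_KEY}" \
+  -m "Generated E2E test cases for {JIRA_TITLE}" \
+  -m "JIRA: {JIRA_URL}"
+git push origin HEAD
+```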
+
+**PR Title**: `Add automated test for {JIRA_KEY}`
+
+**PR Body**: Includes JIRA details, test description, and validation checklist
+
+## Output
+- PR URL: https://github.com/openshift/openshift-tests-private/pull/{PR_NUMBER}
+
+## See Also
+- `/run-tests` - Previous step
+- PR submission rules: `.claude/config/rules/pr_submission_rules.md`
+
+---
+
+Execute PR submission for: **{args}**
diff --git a/.claude/config/rules/e2e_rules/e2e_test_case_guidelines.md b/.claude/config/rules/e2e_rules/e2e_test_case_guidelines.md
new file mode 100644
index 00000000000..c65958db5fe
--- /dev/null
+++ b/.claude/config/rules/e2e_rules/e2e_test_case_guidelines.md
@@ -0,0 +1,5 @@
+# E2E Test Case Addition Guidelines for Hive
+
+## 📋 Overview & Summary  
+This guideline standardizes the addition and management of Hive E2E test cases, ensuring consistent structure, clear classification, and maintainable and executable tests.
+
diff --git a/.claude/config/rules/e2e_rules/e2e_test_case_guidelines_test_private.md b/.claude/config/rules/e2e_rules/e2e_test_case_guidelines_test_private.md
new file mode 100644
index 00000000000..7de78dd364f
--- /dev/null
+++ b/.claude/config/rules/e2e_rules/e2e_test_case_guidelines_test_private.md
@@ -0,0 +1,242 @@
+# E2E Test Case Guidelines for Hive (Simplified)
+
+## 🚨 MANDATORY RULES (Must Follow ALL)
+
+1. **NEVER create new test files** - Use existing platform files (hive_aws.go, hive_azure.go, etc.)
+2. **ALWAYS learn from existing tests** - Analyze 3-5 existing tests before writing
+3. **Use createCD() function** - Don't manually create imageSet and ClusterDeployment separately
+4. **Follow RFC 1123** - Convert JIRA keys: `HIVE-2544` → `testCaseID := "hive2544"` (see the conversion sketch after this list)
+5. **Include proper cleanup** - Use defer statements for all resources
+6. **For DNS tests** - Extract hiveutil, use enableManagedDNS(), and get DNSZone name dynamically (NEVER hardcode)
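+
+A quick shell illustration of rule 4's conversion (the variable name is arbitrary):
+
+```bash
+# HIVE-2544 → hive2544 (lowercase, strip non-alphanumeric characters)
+testCaseID=$(echo "HIVE-2544" | tr '[:upper:]' '[:lower:]' | tr -cd '[:alnum:]')
+echo "$testCaseID"   # prints: hive2544
+```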
+
+---
+
+## 📋 Platform File Mapping
+
+| Platform | File |
+|----------|------|
+| AWS | `hive_aws.go` |
+| Azure | `hive_azure.go` |
+| GCP | `hive_gcp.go` |
+| vSphere | `hive_vsphere.go` |
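+
+To honor rule 1 (never create new test files), a quick existence check for the target files in the table above might look like this (path taken from the repository layout referenced elsewhere in this workflow):
+
+```bash
+ls test/extended/cluster_operator/hive/hive_{aws,azure,gcp,vsphere}.go
+```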
+
+---
+
+## 🔍 Test Naming Convention
+
+```go
+g.It("NonHyperShiftHOST-Longduration-NonPreRelease-ConnectedOnly-Author:{AUTHOR}-Medium-{JIRA_KEY}-{Description} [Serial]", func() {
+```
+
+**Example:**
+```go
+g.It("NonHyperShiftHOST-NonPreRelease-Author:mihuang-Medium-HIVE-2544-Machinepool in Azure non-AZ regions [Serial]", func() {
+```
+
+---
+
+## 🧠 Pattern Learning (MANDATORY)
+
+### Before Writing Tests:
+```bash
+# Analyze existing tests in target platform file
+grep -A 50 "g\.It.*NonHyperShiftHOST" hive_aws.go | head -200
+```
+
+### Extract These Patterns:
+1. **Variable naming**: `testCaseID`, `poolName`, `cdName`
+2. **Step organization**: `exutil.By("Step description")`
+3. **Validation**: `newCheck("expect", "get", asAdmin, ...)`
+4. **Cleanup**: `defer cleanupObjects(...)`
+
+---
+
+## 📋 Test Structure Template
+
+### ClusterDeployment Test
+```go
+testCaseID := "hiveXXXX"  // RFC 1123: lowercase, no symbols
+cdName := "cluster-" + testCaseID + "-" + getRandomString()[:ClusterSuffixLen]
+
+exutil.By("Creating ClusterDeployment")
+// Use createCD() or platform template
+
+exutil.By("Waiting for cluster installation")
+// Use newCheck() for validation
+
+exutil.By("Testing functionality")
+// Test-specific logic
+
+exutil.By("Cleanup")
+defer cleanupObjects(...)
+```
+
+### MachinePool Test
+```go
+testCaseID := "hiveXXXX"
+cdName := "cluster-" + testCaseID + "-" + getRandomString()[:ClusterSuffixLen]
+
+exutil.By("Creating ClusterDeployment")
+// Create CD first
+
+exutil.By("Creating MachinePool")
+// Use machinepool template
+
+exutil.By("Validating MachinePool")
+// Wait for ready status
+
+defer cleanupObjects(...)
+```
+
+---
+
+## 🌐 Platform-Specific Checks
+
+### Azure Government Cloud
+```go
+if !isGovCloud {
+    g.Skip("Test requires Azure Government Cloud, current: " + region)
+}
+```
+
+### AWS Government Cloud
+```go
+if !isGovCloud {
+    g.Skip("Test requires AWS Government Cloud, current: " + region)
+}
+```
+
+---
+
+## 🔍 Managed DNS Setup (MANDATORY for manageDNS tests)
+
+**MANDATORY Reference**: Test case 24088
+
+### ✅ CORRECT Pattern
+```go
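+// Step 1: Extract the hiveutil binary (used below to enable managed DNS)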
+hiveutilPath := extractHiveutil(oc, tmpDir)
+e2e.Logf("hiveutil extracted to %v", hiveutilPath)
+
+// Step 2: Enable managed DNS
+exutil.By("Enable managed DNS using hiveutil")
+enableManagedDNS(oc, hiveutilPath)
+
+
+// MANDATORY: Get DNSZone name dynamically - NEVER hardcode
+dnsZoneName, _, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("dnszone", "-n", oc.Namespace(), "-o=jsonpath={.items[0].metadata.name}").Outputs()
+o.Expect(err).NotTo(o.HaveOccurred())
+e2e.Logf("DNSZone created: %s", dnsZoneName)
+```
+
+### ❌ WRONG Pattern (Don't Do This)
+```go
+// ❌ Don't manually create imageSet and cluster separately
+imageSet := clusterImageSet{...}
+imageSet.create(oc)
+cluster := clusterDeployment{...}
+cluster.create(oc)
+
+// ❌ Don't hardcode DNSZone name
+dnsZoneName := cdName + "-zone"  // WRONG!
+```
+
+---
+
+## 🚨 kubectl/oc Command Rules
+
+### ❌ FORBIDDEN (Invalid Syntax)
+```bash
+# NEVER use -A with specific resource names (will ALWAYS fail)
+oc get dnszone my-zone -A -o=jsonpath={...}
+oc get clusterdeployment my-cd -A -o=jsonpath={...}
+```
+
+### ✅ CORRECT Syntax
+```bash
+# Option 1: Specific namespace + specific name
+oc get dnszone my-zone -n <namespace> -o=jsonpath={...}
+
+# Option 2: All namespaces WITHOUT specific name
+oc get dnszone -A -o=jsonpath={.items[*].metadata.name}
+```
+
+---
+
+## 🚫 Forbidden Actions
+
+- ❌ Creating new test files
+- ❌ Modifying `hive_util.go` without coordination
+- ❌ Skipping platform detection
+- ❌ Missing cleanup operations
+- ❌ Using `-A` flag with specific resource names
+- ❌ Creating inline YAML for resources
+- ❌ Redundant logging (use `exutil.By()` only for key steps)
+
+---
+
+## 🔧 Template Strategy Decision
+
+**Check CRD field compatibility before modifying templates:**
+
+```bash
+# Check if new field is optional
+oc get crd clusterdeployments.hive.openshift.io -o yaml | grep -A 5 newField
+```
+
+**Decision Matrix:**
+
+| Field Type | Action |
+|------------|--------|
+| Optional + omitempty | Merge to existing template or use JSON patch |
+| Required field | Create new template |
+| Backward compatible | Merge to existing template |
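+
+Where the matrix suggests a JSON patch for an optional field, a minimal illustration could be the following (the field name is hypothetical; adapt to the actual resource and template strategy):
+
+```bash
+oc patch clusterdeployment ${CD_NAME} -n ${NAMESPACE} --type=json \
+  -p '[{"op": "add", "path": "/spec/newField", "value": true}]'
+```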
+
+---
+
+## 📝 MachinePool Non-AZ Region Testing
+
+**For regions without availability zones (e.g., usgovtexas):**
+
+```go
+// Expect ONLY ONE MachineSet (not multiple zone-specific ones)
+machineSetNames, err := oc.AsAdmin().WithoutNamespace().Run("get").Args(
+    "--kubeconfig="+kubeconfig, "MachineSet", "-n", "openshift-machine-api",
+    "-l", "hive.openshift.io/machine-pool=worker",
+    "-o=jsonpath={.items[*].metadata.name}").Output()
+
+machineSetNameList := strings.Fields(machineSetNames)
+o.Expect(len(machineSetNameList)).To(o.Equal(1), "Expected 1 MachineSet for non-AZ region")
+```
+
+---
+
+## 🌍 Testing Region Override
+
+```go
+// Override region for specific tests
+spokeRegion := "usgovtexas"     // Azure Gov
+spokeRegion := "eastus"         // Azure Public
+spokeRegion := "us-east-1"      // AWS
+```
+
+---
+
+## ✅ Best Practices Summary
+
+1. **Learn first**: Analyze existing tests before writing
+2. **Follow patterns**: Variable names, validation, cleanup
+3. **Use utilities**: createCD(), enableManagedDNS, newCheck()
+4. **Proper syntax**: `-n <namespace>` with specific names
+5. **Clean code**: Use defer, avoid redundant logs
+6. **RFC 1123**: Lowercase JIRA keys, no symbols
+7. **Reference tests**: 24088 (managed DNS), 25447 (MachinePool)
+
+---
+
+## 📚 Key Reference Test Cases
+
+- **24088**: Managed DNS setup pattern
+- **25447**: MachinePool test structure
+- **32223**: ClusterDeployment setup process
+
+Check these tests for detailed implementation examples.
diff --git a/.claude/config/rules/pr_submission_rules.md b/.claude/config/rules/pr_submission_rules.md
new file mode 100644
index 00000000000..cf0a6349187
--- /dev/null
+++ b/.claude/config/rules/pr_submission_rules.md
@@ -0,0 +1,11 @@
+# PR Submission Rules for E2E Test Code
+
+pr_submission_rules:
+  template_requirements:
+    - "Follow .github/pull_request_template.md from openshift-tests-private repository to create a PR"
+  
+  pr_creation:
+    - "Create each PR with hold status"
+  
+  test_logs_location:
+    - "Test logs can be found from: .claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/test_execution_results/{JIRA_KEY}_test_execution_log.txt"
diff --git a/.claude/config/rules/test_case_rules/test_case_generation_rules_hive.md b/.claude/config/rules/test_case_rules/test_case_generation_rules_hive.md
new file mode 100644
index 00000000000..c7a7b258fa8
--- /dev/null
+++ b/.claude/config/rules/test_case_rules/test_case_generation_rules_hive.md
@@ -0,0 +1,175 @@
+# Test Case Generation Rules - Hive Component
+
+## Repository Context
+
+### Working Environment
+- **Current Location**: `.claude/` directory (AI test generation workspace WITHIN Hive repository)
+- **Analysis Target**: Hive product source code at repository root (`../`)
+- **Test Generation Target**: `.claude/test_artifacts/{COMPONENT}/{JIRA_KEY}/openshift-tests-private/` (private E2E test repository)
+
+### Key Distinction
+- **Product Code**: What we ANALYZE - Hive source code in parent directory
+- **Test Code**: What we GENERATE - E2E tests in openshift-tests-private
+- **DeepWiki**: Alternative source for Hive architecture analysis (local code preferred)
+
+---
+
+## E2E vs Manual Test Classification Rules
+
+### Manual Test Scenarios (MANDATORY)
+
+The following scenarios MUST be classified as **Manual Test Cases**:
+
+#### 1. Upgrade/Retrofit Scenarios
+- **Keywords**: "retrofit", "legacy", "upgrade"
+- **Reason**: Requires testing with legacy resources (without feature) → upgrade to new Hive (with feature)
+- **Example**: Testing backward compatibility with existing clusters
+
+#### 2. AWS Shared VPC
+- **Indicator**: Uses AWS `HostedZoneRole` field
+- **Reason**: DNS hosted zone is in a different AWS account
+- **Setup**: Requires manual AWS shared VPC configuration
+
+#### 3. GCP Shared VPC
+- **Indicator**: Uses GCP `NetworkProjectID` field
+- **Reason**: Network resources are in a different GCP project
+- **Setup**: Requires manual GCP shared VPC configuration
+
+#### 4. Azure Resource Group
+- **Indicator**: Uses Azure `ResourceGroupName` field
+- **Reason**: Uses existing/pre-created resource group
+- **Setup**: Requires manual preparation of specific resource group
+
+#### 5. Special Platforms
+- **Platforms**: Nutanix, OpenStack, IBM Cloud
+- **Reason**: Complex manual setup or limited CI support
+- **Setup**: Requires platform-specific manual configuration
+
+### E2E Test Scenarios
+
+All other scenarios should be E2E tests unless they meet the Manual Test criteria above.
+
+---
+
+## Hive-Specific Test Requirements
+
+### DNSZone Testing (MANDATORY)
+
+**CRITICAL**: DNSZone tests MUST follow this pattern:
+
+1. **Create Managed DNS Cluster**: 
+   - Use `hiveutil` to configure `managedDomains` in `HiveConfig` (see the verification sketch after this list)
+   - Create `ClusterDeployment` to trigger DNSZone creation automatically
+   - **DO NOT** create DNSZone resources directly
+
+2. **Reference Implementation**:
+   - See E2E case 24088 for proper setup pattern
+   - Managed DNS ensures proper zone lifecycle management
+
+3. **Why This Matters**:
+   - DNSZones are automatically created by Hive controller
+   - Direct creation bypasses controller logic
+   - Managed domains ensure proper cleanup
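+
+A small verification sketch (illustrative; assumes the default `HiveConfig` named `hive`) to confirm managed domains are in place before relying on automatic DNSZone creation:
+
+```bash
+# Managed domains configured via hiveutil should be visible on the HiveConfig
+oc get hiveconfig hive -o jsonpath='{.spec.managedDomains}'
+
+# The DNSZone is created by the Hive controller alongside the ClusterDeployment
+oc get dnszone -n ${CD_NAMESPACE}
+```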
+
+### Platform-Specific Considerations
+
+#### AWS Platform
+- **Standard Tests**: Use regular AWS credentials
+- **Shared VPC Tests**: Mark as Manual (requires HostedZoneRole setup)
+- **STS/OIDC**: Can be E2E if using standard test credentials
+
+#### GCP Platform
+- **Standard Tests**: Use regular GCP credentials
+- **Shared VPC Tests**: Mark as Manual (requires NetworkProjectID setup)
+
+#### Azure Platform
+- **Standard Tests**: Use regular Azure credentials
+- **Existing Resource Group**: Mark as Manual (requires ResourceGroupName setup)
+
+#### Other Platforms
+- **VSphere**: E2E (if CI support available)
+- **BareMetal**: E2E (if CI support available)
+- **Nutanix**: Manual (complex setup)
+- **OpenStack**: Manual (complex setup)
+- **IBM Cloud**: Manual (complex setup)
+
+---
+
+## Test Case Design Guidelines
+
+### Cluster Lifecycle Testing
+
+1. **Provision Tests**: Create ClusterDeployment, wait for installation
+2. **Deprovision Tests**: Delete ClusterDeployment, verify cleanup
+3. **Hibernation Tests**: Hibernate/resume cluster, verify state transitions
+4. **Upgrade Tests**: Manual only (see classification rules)
+
+### Resource Testing Patterns
+
+1. **ClusterDeployment**: Main resource for cluster management
+2. **ClusterPool**: Pre-provisioned cluster pool management
+3. **DNSZone**: Only via managed domain setup (see DNSZone requirements)
+4. **SyncSet**: Resource synchronization to managed clusters
+5. **MachinePool**: Additional worker node groups
+
+### Validation Requirements
+
+1. **Resource Creation**: Verify resources exist and are in expected state (see the sketch after this list)
+2. **Status Conditions**: Check status conditions indicate success/failure
+3. **Controller Behavior**: Verify controllers act correctly
+4. **Cleanup**: Ensure resources are properly deleted
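+
+A hedged sketch of checks 1 and 2 (the field and condition names shown are common Hive examples, not an exhaustive list):
+
+```bash
+# 1. Resource creation / expected state
+oc get clusterdeployment ${CD_NAME} -n ${NAMESPACE} -o jsonpath='{.spec.installed}'
+
+# 2. Status conditions indicating success or failure
+oc get clusterdeployment ${CD_NAME} -n ${NAMESPACE} \
+  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
+```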
+
+---
+
+## Hive Module Information
+
+- **Module Name**: `github.com/openshift/hive`
+- **Repository**: `https://github.com/openshift/hive`
+- **Test Repository**: `https://github.com/openshift/openshift-tests-private`
+
+---
+
+## Quick Classification Checklist
+
+Use this checklist when classifying test cases:
+
+- [ ] Does it involve upgrade/retrofit/legacy? → **Manual**
+- [ ] Does it require AWS Shared VPC (HostedZoneRole)? → **Manual**
+- [ ] Does it require GCP Shared VPC (NetworkProjectID)? → **Manual**
+- [ ] Does it require Azure Resource Group (ResourceGroupName)? → **Manual**
+- [ ] Is it for Nutanix/OpenStack/IBM Cloud? → **Manual**
+- [ ] Does it test DNSZone? → Ensure uses managed domain setup
+- [ ] All other scenarios → **E2E**
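+
+A rough keyword scan (illustrative; `jira_description.txt` is a hypothetical local copy of the issue text) can help apply the checklist above:
+
+```bash
+grep -iE 'retrofit|legacy|upgrade|HostedZoneRole|NetworkProjectID|ResourceGroupName|Nutanix|OpenStack|IBM Cloud' \
+  jira_description.txt && echo "Review for Manual classification" || echo "Likely E2E"
+```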
+
+---
+
+## Examples
+
+### Example 1: Standard AWS ClusterDeployment → E2E
+- Creates cluster on AWS with standard credentials
+- No shared VPC requirements
+- Classification: **E2E**
+
+### Example 2: AWS Shared VPC with HostedZoneRole → Manual
+- Requires DNS zone in different AWS account
+- Uses HostedZoneRole field
+- Classification: **Manual**
+
+### Example 3: DNSZone Management → E2E (with proper setup)
+- Uses managed domain configuration
+- Creates ClusterDeployment to trigger DNSZone
+- Classification: **E2E**
+
+### Example 4: Legacy Cluster Upgrade → Manual
+- Tests retrofitting new feature to old cluster
+- Requires upgrade procedure
+- Classification: **Manual**
+
+---
+
+## Notes
+
+1. **Always check keywords**: "retrofit", "legacy", "upgrade", "shared VPC", "existing resource group"
+2. **DNSZone is special**: Never create directly, always via managed domain
+3. **When in doubt**: If setup is complex or requires manual infrastructure, mark as Manual
+4. **Platform support**: Check if platform has CI/E2E support before marking as E2E
diff --git a/.claude/config/rules/test_case_rules/unified_test_generation_rules.md b/.claude/config/rules/test_case_rules/unified_test_generation_rules.md
new file mode 100644
index 00000000000..c0215806d62
--- /dev/null
+++ b/.claude/config/rules/test_case_rules/unified_test_generation_rules.md
@@ -0,0 +1,249 @@
+# Unified Test Case Generation Framework
+
+**Integrated**: Core Problem → Thinking Phases → Architecture → Validation → Quality
+
+---
+
+## 1. CORE PROBLEM IDENTIFICATION (EXECUTE FIRST)
+
+### PRIMARY RULE
+Identify single technical root cause, design minimal validation
+
+### MANDATORY
+- Multiple platforms = 1 test case with platform note
+- **ORDER**: Core Problem → Thinking Phases → Test Generation
+
+---
+
+## 2. THINKING PHASES (MANDATORY)
+
+### FAST MODE
+Skip Phase 2 if issue has clear root cause + standard validation pattern
+
+### PHASE 1: ECOSYSTEM CONTEXT (MANDATORY)
+
+**Questions:**
+- How do real users encounter this issue?
+- User-facing component or part of larger workflow?
+- What user action triggers affected resources?
+
+**Analysis:**
+- Map JIRA technical components → user-facing operations
+- Distinguish internal components vs user-accessible APIs
+- Identify realistic scenarios exposing bug in production
+
+**Output:** Complete mapping: technical component → user workflow
+
+---
+
+### PHASE 2: USER REALITY (FAST MODE CAN SKIP)
+
+**Questions:**
+- Realistic operational context?
+- User scale and complexity?
+- Environmental constraints and dependencies?
+
+**Analysis:**
+- Target users: engineers, developers, operators, admins
+- Scale: cluster count, resource volume, operation frequency
+- Common patterns, error scenarios, recovery procedures
+
+**Output:** Test scenarios mirroring authentic user environments
+
+---
+
+### PHASE 3: SYSTEM INTEGRATION (MANDATORY)
+
+**Questions:**
+- Component integration with other components?
+- Data flow and propagation mechanisms?
+- Configuration flow: user input → final state?
+
+**Analysis:**
+- All components in user workflow
+- Data/config propagation, dependency chains
+- Failure patterns, error handling
+
+**Architecture Sources:**
+- PR analysis (gh CLI/WebFetch) for code changes
+- DeepWiki for stable knowledge (~1 week lag)
+- Local repo: `grep -r ComponentName ./pkg/ ./cmd/`, `find . -name *controller*`
+
+**Output:** Multi-component interaction understanding
+
+---
+
+### PHASE 4: VALUE DELIVERY (MANDATORY)
+
+**Questions:**
+- How validate complete user workflow?
+- Quantitative measurements for desired outcome?
+
+**Analysis:**
+- Quantitative measurements for workflow completion
+- Numerical thresholds for pass/fail
+- End-to-end value delivery, not just component function
+
+**Output:** Measurable criteria for successful functionality
+
+---
+
+## 3. TECHNICAL VALIDATION (UNIVERSAL)
+
+### Core Questions
+
+| Question | Description |
+|----------|-------------|
+| **What Changed?** | Specific behavior/state/output difference after fix? |
+| **How to Observe?** | Observable artifact proving the change? |
+| **What Threshold?** | Numerical/binary criterion for success vs failure? |
+
+### 5 Validation Types - CHOOSE ONE
+
+| Type | When to Use | Approach |
+|------|-------------|----------|
+| **A: Stability** | changing/inconsistent behavior | measure consistency across time |
+| **B: Frequency** | too frequent/excessive behavior | count occurrences, compare threshold |
+| **C: State** | incorrect state | compare actual vs expected |
+| **D: Absence** | unwanted content | verify not found |
+| **E: Progression** | stuck/not progressing | monitor state transitions |
+
+### Meta-Rules
+- ✅ Quantitative > Qualitative
+- ✅ Observable > Internal
+- ✅ Reproducible + Contextual + Decisive
+
+### Pattern Selection
+- ⛔ **FORBIDDEN**: Multiple patterns for same issue, 3+ test cases for single root cause
+- ✅ **REQUIRED**: ONE most direct pattern for root cause
+
+---
+
+## 4. TEST QUALITY RULES
+
+### ✅ Good Test Characteristics
+- Realistic user workflows, not direct resource manipulation
+- Specific measurable results with numerical thresholds
+- Platform-appropriate CLI tools (oc, kubectl, etc.)
+- Recovery scenarios: error → healthy state
+- End-to-end flows, not isolated components
+
+### ❌ Bad Test Characteristics
+- Direct manipulation of internal/auto-generated resources
+- Isolated component tests without user context
+- Vague results, missing quantitative validation
+- Inappropriate CLI tools, no recovery testing
+
+### Mandatory Elements
+- ✅ Time-based measurements for performance/behavior
+- ✅ Frequency counting for reconciliation/update problems
+- ✅ Log pattern analysis for component behavior
+- ✅ Resource state tracking for consistency
+- ✅ Metadata change tracking (resourceVersion, generation)
+- ✅ Recovery path validation: error state → healthy state
+
+### Simple Validation Checklist
+
+| Check | Question |
+|-------|----------|
+| **1** | Test reproduces JIRA issue through realistic user scenarios? |
+| **2** | User would naturally perform these steps in production? |
+| **3** | Every technical problem has quantitative validation with metrics? |
+
+---
+
+## 5. ARCHITECTURE UNDERSTANDING
+
+### Data Sources
+
+| Source | Use Case | Details |
+|--------|----------|---------|
+| **PR Analysis** | Code changes, new implementations | `gh CLI` / `WebFetch` |
+| **DeepWiki** | Stable knowledge | Platform reqs, architecture (~1 week lag) |
+| **Local Repo** | Latest source code | `grep -r ComponentName ./pkg/ ./cmd/`, `find . -name '*controller*'` |
+
+### Analysis Tasks
+- Component role in overall system
+- End-to-end flow exposing reported issue
+- User workflows vs direct resource manipulation
+- Component dependencies and integration points
+- How users interact: directly or through higher-level operations?
+- What triggers resource creation/modification in real scenarios?
+
+### Repository Context
+- **Product Code**: What we ANALYZE - project source
+- **Test Code**: Where we WRITE - test repository
+- **Check**: `pwd; ls -la | grep -E '(go.mod|Makefile|pkg|cmd|src)'`
+
+---
+
+## 6. TEST COVERAGE & GENERATION
+
+### Test Coverage Focus
+- **Primary**: End-to-end user workflows naturally exercising the component
+- **Scenarios**: Happy path, edge cases, error scenarios, integration, recovery
+- **Commands**: Platform-appropriate CLI, realistic naming, executable in real environments
+- **Validation**: Metadata tracking, status analysis, log analysis, event monitoring
+
+### Test Steps Guidelines
+- ✅ **Workflow-Driven**: complete user operations
+- ✅ **Realistic Scenarios**: authentic interaction patterns
+- ✅ **Measurable Results**: quantifiable expected results
+- ✅ **Executable Commands**: exact commands users execute
+- ⛔ **AVOID**: internal/auto-generated resource manipulation
+
+---
+
+## 7. FRAMEWORK ENFORCEMENT
+
+### Mandatory Completion
+- FAST MODE active if: clear root cause + standard validation pattern
+- Complete phases in sequence before technical analysis
+- Each phase produces concrete outputs
+
+### FAST MODE Details
+- **Skip**: Phase 2 (User Reality)
+- **Flow**: Phase 1 → Phase 3
+- **Saves**: 15-20 seconds
+
+### Violation Handling
+- Incomplete phase → **STOP**, complete missing phases
+- Analysis contradicting framework → **revise**
+- Test not reflecting user reality → **redesign**
+- Component-isolation ignoring ecosystem → **prohibited**
+
+---
+
+## 8. QUANTITATIVE VALIDATION EXAMPLES
+
+### Component Behavior
+- Measure operation frequency over a time window
+- Compare metadata before/after (resourceVersion, generation)
+- Count log entries for expected patterns
+- Time-based status stability monitoring
+
+### Error Handling
+- Content comparison across operations
+- Pattern match unwanted content removal
+- State transition validation with timeouts
+- Resource cleanup with quantitative checks
+
+### Recovery Validation
+- Monitor: error state → healthy state transition
+- Verify resource functionality after recovery
+- Validate error artifact cleanup
+- Confirm normal operation resumption with metrics
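+
+As one possible shape for such a recovery check (the resource, namespace, and condition names are assumptions, not prescribed values):
+
+```bash
+NS=hive-test
+CD=cd-example   # hypothetical ClusterDeployment under test
+
+# 1. Recovery: the error condition must clear within a bounded window (binary pass/fail).
+oc wait "clusterdeployment/$CD" -n "$NS" \
+  --for=condition=ProvisionFailed=False --timeout=10m || exit 1
+
+# 2. Post-recovery functionality: the resource must report the expected healthy condition again.
+STATUS=$(oc get "clusterdeployment/$CD" -n "$NS" \
+  -o jsonpath='{.status.conditions[?(@.type=="Provisioned")].status}')
+[ "$STATUS" = "True" ] && echo "PASS: healthy after recovery" || echo "FAIL: still unhealthy"
+```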
+
+---
+
+## 9. PROJECT-SPECIFIC CONFIGURATION
+
+### Note
+Component-specific rules should be maintained in separate files
+
+### Example
+- `test_case_generation_rules_{component}.yaml` for component mandatory rules
+- Define project-specific test environment requirements as needed
+
+### Usage
+Load this unified framework + component-specific rules together for complete guidance
diff --git a/.claude/config/templates/test_cases_template.md b/.claude/config/templates/test_cases_template.md
new file mode 100644
index 00000000000..6cf6ce5467b
--- /dev/null
+++ b/.claude/config/templates/test_cases_template.md
@@ -0,0 +1,42 @@
+# Test Case: {JIRA_KEY}
+**Component:** {COMPONENT}
+**Summary:** {ISSUE_SUMMARY}
+
+## Test Overview
+- **Total Test Cases:** {NUMBER_OF_TEST_CASES}
+- **Test Types:** {TEST_TYPES}
+- **Estimated Time:** {ESTIMATED_TIME}
+
+## Test Cases
+
+### Test Case {JIRA_KEY}_001
+**Name:** {TEST_CASE_NAME_1}
+**Description:** {TEST_CASE_DESCRIPTION_1}
+**Type:** {TEST_TYPE_1}
+**Priority:** {PRIORITY_1}
+
+#### Prerequisites
+{PREREQUISITES_1}
+
+#### Test Steps
+1. **Action:** {ACTION_1}
+   **Expected:** {EXPECTED_1}
+
+2. **Action:** {ACTION_2}
+   **Expected:** {EXPECTED_2}
+
+### Test Case {JIRA_KEY}_002
+**Name:** {TEST_CASE_NAME_2}
+**Description:** {TEST_CASE_DESCRIPTION_2}
+**Type:** {TEST_TYPE_2}
+**Priority:** {PRIORITY_2}
+
+#### Prerequisites
+{PREREQUISITES_2}
+
+#### Test Steps
+1. **Action:** {ACTION_1}
+   **Expected:** {EXPECTED_1}
+
+2. **Action:** {ACTION_2}
+   **Expected:** {EXPECTED_2}
diff --git a/.claude/config/templates/test_coverage_matrix_template.md b/.claude/config/templates/test_coverage_matrix_template.md
new file mode 100644
index 00000000000..6c41d86c21e
--- /dev/null
+++ b/.claude/config/templates/test_coverage_matrix_template.md
@@ -0,0 +1,14 @@
+# {jira_issue_key} Test Coverage Matrix
+
+## Scenario 1 - Scenario Name
+| Platform | Cluster Type(s) | Test Type | Reasoning | Priority | Test Case ID | Status |
+|----------|-----------------|------------|-----------------------------|------------|--------------|----------------|
+| [AWS] | [cluster_types] | E2E/Manual | [reasoning] | High/Medium | TC-HIVE-XXX | ❌ Not Started |
+| [Azure] | [cluster_types] | E2E/Manual | [reasoning] | High/Medium | TC-HIVE-XXX | ❌ Not Started |
+
+
+## Scenario 2 - Scenario Name
+| Platform | Cluster Type(s) | Test Type | Reasoning | Priority | Test Case ID | Status |
+|----------|-----------------|------------|-----------------------------|------------|--------------|----------------|
+| [AWS] | [cluster_types] | E2E/Manual | [reasoning] | High/Medium | TC-HIVE-XXX | ❌ Not Started |
+| [Azure] | [cluster_types] | E2E/Manual | [reasoning] | High/Medium | TC-HIVE-XXX | ❌ Not Started |
diff --git a/.claude/config/templates/test_report_template.md b/.claude/config/templates/test_report_template.md
new file mode 100644
index 00000000000..50cb96e9494
--- /dev/null
+++ b/.claude/config/templates/test_report_template.md
@@ -0,0 +1,131 @@
+# 🧾 {jira_issue_key} Test Report
+
+## 1. Basic Information
+| Item | Content |
+|------|---------|
+| **Test Project Name** | [Project Name] |
+| **Test Request** | [JIRA Issue Link] |
+| **Test Period** | [Start Date] - [End Date] |
+| **Test Engineer** | [Tester Name] |
+
+---
+
+## 2. Test Conclusion
+### 2.1 Test Pass/Fail Status
+⬜ **Pass** - Can be released to production environment
+⬜ **Fail** - Needs to be fixed and re-tested
+⬜ **Conditional Pass** - Can be released after meeting the following conditions: [List conditions]
+
+### 2.2 Execution Summary
+- **Execution Results**: Total [X] test cases executed, [X] passed, [X] failed, [XX]% pass rate.
+- **Key Findings**:
+  - ✅ Feature A working properly
+  - ⚠️ Feature B has resource cleanup issues (HIVE-2579)
+- **Overall Conclusion**:
+  > The current version's main functionality is usable, but it is recommended to fix high-priority defects before entering the release phase.
+
+### 2.3 Risk Assessment
+| Risk Description | Probability | Impact | Mitigation | Owner | Status |
+|------------------|-------------|--------|------------|-------|--------|
+| [Risk 1] | High/Medium/Low | High/Medium/Low | [Mitigation plan] | [Owner] | Open |
+| None | - | - | - | - | - |
+
+---
+
+## 3. Test Objectives & Scope
+### 🎯 Test Objectives
+Brief description of the main goals of this test, for example:
+> Verify that the HIVE-2579 fix is effective; verify the 4.17 PrivateLink mode installation and deletion process.
+
+### 📍 Test Scope
+- **Modules Involved**:
+- **Features Covered**:
+- **Out of Scope**:
+
+### 📋 Test Artifacts & Execution Results
+- **Test Case Files**:
+  - **E2E Test Cases**: `{jira_issue_key}_e2e_test_case.md`
+  - **Manual Test Cases**: `{jira_issue_key}_manual_test_case.md`
+  - **Test Coverage Matrix**: `test_coverage_matrix.md`
+- **E2E PR Links**:
+  - **E2E Test Code**: [PR Link to E2E code]
+  - **Test Execution Logs**: See `test_execution_results/` directory
+
+#### Test Execution Details
+| Test Case ID | Test Case Name | Hive Version | Platform | Spoke Cluster Version | Status |
+|--------------|----------------|--------------|----------|----------------------|--------|
+| TC001 | Create Cluster (PrivateLink Mode) | 1.2.1234 | AWS | 4.16 | ✅ Pass |
+| TC002 | Delete Cluster (Resource Cleanup) | 1.2.1234 | AWS | 4.16 | ❌ Fail |
+| TC003 | Verify MachinePool Status Sync | 1.2.1234 | GCP | 4.15 | ✅ Pass |
+| TC004 | Install on Azure PrivateLink | 1.2.1235 | Azure | 4.15 | ✅ Pass |
+
+#### Test Summary by Platform
+| Platform | Total Cases | Executed | Passed | Failed | Pass Rate |
+|----------|-------------|----------|--------|--------|-----------|
+| AWS | [X] | [X] | [X] | [X] | [XX]% |
+| GCP | [X] | [X] | [X] | [X] | [XX]% |
+| Azure | [X] | [X] | [X] | [X] | [XX]% |
+| **Total** | **[X]** | **[X]** | **[X]** | **[X]** | **[XX]%** |
+
+---
+
+## 4. Defects & Issues Summary
+
+### 4.1 Product Bugs
+| Issue ID | Title | Severity | Platform | Status | Notes |
+|----------|-------|----------|----------|--------|-------|
+| HIVE-2579 | Cluster deletion resources not fully cleaned | Major | AWS | Open | Reproduced multiple times |
+| HIVE-2565 | AWS PrivateLink Cluster installation failed | Blocker | AWS | Fixed | Verification passed |
+| None | No product bugs found | - | - | - | - |
+
+### 4.2 E2E Test Bugs
+| Issue ID | Title | Severity | Platform | Status | Notes |
+|----------|-------|----------|----------|--------|-------|
+| [E2E-001] | [E2E test case issue description] | [Severity] | [Platform] | [Status] | [Notes] |
+| None | No E2E test bugs found | - | - | - | - |
+
+### 4.3 Defect Statistics by Platform
+| Platform | Bug Type | Severity | Total | Open | Fixed | Verification Passed |
+|----------|----------|----------|-------|------|-------|---------------------|
+| AWS | Product | Blocker | [X] | [X] | [X] | [X] |
+| AWS | Product | Major | [X] | [X] | [X] | [X] |
+| GCP | Product | Major | [X] | [X] | [X] | [X] |
+| Azure | E2E Test | Minor | [X] | [X] | [X] | [X] |
+
+---
+
+## 5. Appendices
+- 📂 **Logs & Screenshots**:
+  - Hive controller pod logs: `/tmp/hive-controller.log`
+  - Provision pod logs: `/tmp/provision.log`
+- 🔗 **Related Links**:
+  - [PR #2344](https://github.com/openshift/hive/pull/2344)
+  - [JIRA HIVE-2579](https://issues.redhat.com/browse/HIVE-2579)
+
+---
+
+## Usage Instructions
+
+### Quick Fill Guide
+1. Replace all `[placeholders]` with actual content
+2. Check the appropriate ⬜ options
+3. Fill in specific numbers, dates, and versions
+4. Add relevant links and log paths
+5. Customize based on actual test scenario
+
+### Test Environment Configuration
+- **Single Environment**: Keep only one row in "Test Environments" table
+- **Multiple Environments**: Add rows for each tested environment (ENV-1, ENV-2, etc.)
+- **Environment ID**: Use consistent IDs (ENV-1, ENV-2) throughout the report
+
+### Key Data Sources
+- **Test Results**: Extract from `test_execution_results/*_comprehensive_test_results.md`
+- **Spoke Cluster Version**: Extract from test execution log files in `test_execution_results/`
+- **Test Cases**: Extract from `test_cases/`
+- **JIRA Data**: Use JIRA issue details for context and requirements
+
+### Report Principles
+- Concise and focused on key findings
+- Clear status indicators for quick assessment
+- Actionable conclusions and recommendations
+- Easy to read and understand
diff --git a/.gitignore b/.gitignore
index 32a79679f04..83c71aeea59 100644
--- a/.gitignore
+++ b/.gitignore
@@ -34,3 +34,7 @@ config/apiserver/certificates/**
 *.swo
 *~
 /_output/
+
+# AI related files
+.claude/settings.local.json
+.claude/test_artifacts/
\ No newline at end of file