Merged
6 changes: 5 additions & 1 deletion docs/src/content/docs/reference/frontmatter-full.md
@@ -5620,10 +5620,14 @@ safe-outputs:

# Option 2: undefined

# Array of extra job steps to run after detection
# Array of extra job steps to run before engine execution
# (optional)
steps: []

# Array of extra job steps to run after engine execution
# (optional)
post-steps: []

# Runner specification for the detection job. Overrides agent.runs-on for the
# detection job only. Defaults to agent.runs-on.
# (optional)
55 changes: 48 additions & 7 deletions docs/src/content/docs/reference/threat-detection.md
@@ -77,7 +77,10 @@ safe-outputs:
threat-detection:
enabled: true # Enable/disable detection
prompt: "Focus on SQL injection" # Additional analysis instructions
steps: # Custom detection steps
steps: # Custom steps run before engine execution
- name: Setup Security Gateway
run: echo "Connecting to security gateway..."
post-steps: # Custom steps run after engine execution
- name: Custom Security Check
run: echo "Running additional checks"
```
@@ -90,7 +93,8 @@ safe-outputs:
| `prompt` | string | Custom instructions appended to default detection prompt |
| `engine` | string/object/false | AI engine config (`"copilot"`, full config object, or `false` for no AI) |
| `runs-on` | string/array/object | Runner for the detection job (default: inherits from workflow `runs-on`) |
| `steps` | array | Additional GitHub Actions steps to run after AI analysis |
| `steps` | array | Additional GitHub Actions steps to run **before** AI analysis (pre-steps) |
| `post-steps` | array | Additional GitHub Actions steps to run **after** AI analysis (post-steps) |

## AI-Based Detection (Default)

@@ -186,13 +190,32 @@ safe-outputs:

## Custom Detection Steps

Add specialized security scanning tools alongside or instead of AI detection:
Add specialized security scanning tools alongside or instead of AI detection. You can run steps **before** the AI engine (for setup, gateway connections, etc.) and steps **after** (for additional scanning based on AI results).

### Pre-Steps (`steps:`)

Steps defined under `steps:` run **before** the AI engine executes. Use these for setup tasks such as connecting to a private AI gateway, installing security tools, or preparing artifacts.

```yaml wrap
safe-outputs:
create-pull-request:
threat-detection:
steps:
- name: Connect to Security Gateway
run: |
echo "Setting up secure connection to analysis gateway..."
# Authentication and connection setup
```

### Post-Steps (`post-steps:`)

Steps defined under `post-steps:` run **after** the AI engine completes its analysis. Use these for additional security scanning, reporting, or cleanup.

```yaml wrap
safe-outputs:
create-pull-request:
threat-detection:
post-steps:
- name: Run Security Scanner
run: |
echo "Scanning agent output for threats..."
@@ -206,11 +229,11 @@ safe-outputs:

**Available Artifacts:** Custom steps have access to `/tmp/gh-aw/threat-detection/prompt.txt` (workflow prompt), `agent_output.json` (safe output items), and `aw.patch` (git patch file).

**Execution Order:** Download artifacts → Run AI analysis (if enabled) → Execute custom steps → Upload detection log.
**Execution Order:** Download artifacts → Execute pre-steps (`steps:`) → Run AI analysis (if enabled) → Execute post-steps (`post-steps:`) → Upload detection log.

## Example: LlamaGuard Integration

Use Ollama with LlamaGuard 3 for specialized threat detection:
Use Ollama with LlamaGuard 3 for specialized threat detection running after AI analysis:

```yaml wrap
---
@@ -219,7 +242,7 @@ engine: copilot
safe-outputs:
create-pull-request:
threat-detection:
steps:
post-steps:
- name: Ollama LlamaGuard 3 Scan
uses: actions/github-script@v8
with:
@@ -261,7 +284,7 @@ safe-outputs:
threat-detection:
prompt: "Check for authentication bypass vulnerabilities"
engine: copilot
steps:
post-steps:
- name: Static Analysis
run: |
# Run static analysis tool
@@ -273,6 +296,24 @@ safe-outputs:
path: /tmp/gh-aw/threat-detection/aw.patch
```

## Example: Private AI Gateway

Connect to a private AI gateway before running the detection engine:

```yaml wrap
safe-outputs:
create-pull-request:
threat-detection:
steps:
- name: Connect to AI Gateway
run: |
# Authenticate and set up connection to private AI gateway
echo "Setting up gateway connection..."
./scripts/setup-gateway.sh
engine:
id: copilot
```

## Error Handling

**When Threats Are Detected:**
9 changes: 8 additions & 1 deletion pkg/parser/schemas/main_workflow_schema.json
@@ -7756,7 +7756,14 @@
},
"steps": {
"type": "array",
"description": "Array of extra job steps to run after detection",
"description": "Array of extra job steps to run before engine execution",
"items": {
"$ref": "#/$defs/githubActionsStep"
}
},
"post-steps": {
"type": "array",
"description": "Array of extra job steps to run after engine execution",
"items": {
"$ref": "#/$defs/githubActionsStep"
}
47 changes: 36 additions & 11 deletions pkg/workflow/threat_detection.go
@@ -3,6 +3,7 @@ package workflow
import (
"encoding/json"
"fmt"
"maps"
"strings"

"github.com/github/gh-aw/pkg/constants"
@@ -14,7 +15,8 @@ var threatLog = logger.New("workflow:threat_detection")
// ThreatDetectionConfig holds configuration for threat detection in agent output
type ThreatDetectionConfig struct {
Prompt string `yaml:"prompt,omitempty"` // Additional custom prompt instructions to append
Steps []any `yaml:"steps,omitempty"` // Array of extra job steps
Steps []any `yaml:"steps,omitempty"` // Array of extra job steps to run before engine execution
PostSteps []any `yaml:"post-steps,omitempty"` // Array of extra job steps to run after engine execution
EngineConfig *EngineConfig `yaml:"engine-config,omitempty"` // Extended engine configuration for threat detection
EngineDisabled bool `yaml:"-"` // Internal flag: true when engine is explicitly set to false
RunsOn string `yaml:"runs-on,omitempty"` // Runner override for the detection job
@@ -24,7 +26,7 @@ type ThreatDetectionConfig struct {
// that actually executes. Returns false when the engine is disabled and no
// custom steps are configured, since the job would have nothing to run.
func (td *ThreatDetectionConfig) HasRunnableDetection() bool {
return !td.EngineDisabled || len(td.Steps) > 0
return !td.EngineDisabled || len(td.Steps) > 0 || len(td.PostSteps) > 0
}

// IsDetectionJobEnabled reports whether a detection job should be created for
@@ -108,13 +110,20 @@ func (c *Compiler) parseThreatDetectionConfig(outputMap map[string]any) *ThreatD
}
}

// Parse steps field
// Parse steps field (pre-execution steps, run before engine execution)
if steps, exists := configMap["steps"]; exists {
if stepsArray, ok := steps.([]any); ok {
threatConfig.Steps = stepsArray
}
}

// Parse post-steps field (post-execution steps, run after engine execution)
if postSteps, exists := configMap["post-steps"]; exists {
if postStepsArray, ok := postSteps.([]any); ok {
threatConfig.PostSteps = postStepsArray
}
}

// Parse runs-on field
if runOn, exists := configMap["runs-on"]; exists {
if runOnStr, ok := runOn.(string); ok {
@@ -144,7 +153,7 @@ func (c *Compiler) parseThreatDetectionConfig(outputMap map[string]any) *ThreatD
}
}

threatLog.Printf("Threat detection configured with custom prompt: %v, custom steps: %v", threatConfig.Prompt != "", len(threatConfig.Steps) > 0)
threatLog.Printf("Threat detection configured with custom prompt: %v, custom pre-steps: %v, custom post-steps: %v", threatConfig.Prompt != "", len(threatConfig.Steps) > 0, len(threatConfig.PostSteps) > 0)
return threatConfig
}
}
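
The tolerant parsing above can be sketched as a standalone helper: absent keys and values that are not arrays are silently ignored rather than treated as errors. `parseDetectionSteps` is a hypothetical illustration, not the actual compiler API, which populates a `ThreatDetectionConfig` struct instead of returning slices.

```go
package main

import "fmt"

// parseDetectionSteps extracts the "steps" and "post-steps" arrays from a
// decoded threat-detection config map. Keys that are missing, or whose
// values are not []any, are ignored, mirroring the tolerant type
// assertions in parseThreatDetectionConfig.
func parseDetectionSteps(configMap map[string]any) (pre, post []any) {
	if v, ok := configMap["steps"].([]any); ok {
		pre = v
	}
	if v, ok := configMap["post-steps"].([]any); ok {
		post = v
	}
	return pre, post
}

func main() {
	cfg := map[string]any{
		"steps":      []any{map[string]any{"name": "Connect to AI Gateway"}},
		"post-steps": []any{map[string]any{"name": "Run Security Scanner"}},
	}
	pre, post := parseDetectionSteps(cfg)
	fmt.Println(len(pre), len(post)) // 1 1
}
```

A malformed value such as `steps: "oops"` simply yields a nil slice, so the detection job falls back to AI-only analysis rather than failing compilation.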
@@ -186,21 +195,26 @@ func (c *Compiler) buildDetectionJobSteps(data *WorkflowData) []string {
// Step 3: Prepare files - copies agent output files to expected paths
steps = append(steps, c.buildPrepareDetectionFilesStep()...)

// Step 4: Setup threat detection (github-script)
// Step 4: Custom pre-steps if configured (run before engine execution)
if len(data.SafeOutputs.ThreatDetection.Steps) > 0 {
steps = append(steps, c.buildCustomThreatDetectionSteps(data.SafeOutputs.ThreatDetection.Steps)...)
}
Comment on lines +198 to +201
Copilot AI Apr 3, 2026

Custom threat-detection steps generated via `buildCustomThreatDetectionSteps()` are inserted without the `detectionStepCondition` `if:` guard. As a result, user-provided pre-steps will run even when the `detection_guard` determines `run_detection == 'false'` (e.g., when there are no agent outputs/patches), which can cause unexpected side effects or failures in runs where threat detection should be skipped. Consider injecting `if: always() && steps.detection_guard.outputs.run_detection == 'true'` into these custom steps by default (unless the user already provided an explicit `if` in the step map).


// Step 5: Setup threat detection (github-script)
steps = append(steps, c.buildThreatDetectionAnalysisStep(data)...)

// Step 5: Engine execution (AWF, no network)
// Step 6: Engine execution (AWF, no network)
steps = append(steps, c.buildDetectionEngineExecutionStep(data)...)

// Step 6: Custom steps if configured
if len(data.SafeOutputs.ThreatDetection.Steps) > 0 {
steps = append(steps, c.buildCustomThreatDetectionSteps(data.SafeOutputs.ThreatDetection.Steps)...)
// Step 7: Custom post-steps if configured (run after engine execution)
if len(data.SafeOutputs.ThreatDetection.PostSteps) > 0 {
steps = append(steps, c.buildCustomThreatDetectionSteps(data.SafeOutputs.ThreatDetection.PostSteps)...)
}

// Step 7: Upload detection-artifact
// Step 8: Upload detection-artifact
steps = append(steps, c.buildUploadDetectionLogStep(data)...)

// Step 8: Parse results, log extensively, and set job conclusion (single JS step)
// Step 9: Parse results, log extensively, and set job conclusion (single JS step)
steps = append(steps, c.buildDetectionConclusionStep()...)

threatLog.Printf("Generated %d detection job step lines", len(steps))
@@ -554,10 +568,21 @@ await main();`
}

// buildCustomThreatDetectionSteps builds YAML steps from user-configured threat detection steps.
// It injects the detection guard condition into each step unless an explicit if: condition is
// already set, ensuring custom steps only run when the detection_guard determines that detection
// should proceed and preventing unexpected side effects in runs with no agent outputs to analyze.
func (c *Compiler) buildCustomThreatDetectionSteps(steps []any) []string {
var result []string
for _, step := range steps {
if stepMap, ok := step.(map[string]any); ok {
// Inject the detection guard condition unless the user already provided an if: condition.
if _, hasIf := stepMap["if"]; !hasIf {
// Clone the map to avoid mutating the original config.
injected := make(map[string]any, len(stepMap)+1)
maps.Copy(injected, stepMap)
injected["if"] = detectionStepCondition
stepMap = injected
}
if stepYAML, err := ConvertStepToYAML(stepMap); err == nil {
result = append(result, stepYAML)
}
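
The clone-before-inject pattern added above can be shown in isolation. This is a minimal sketch: `withGuard` is a hypothetical helper name, and the value of `detectionStepCondition` here is assumed from the review comment's suggested expression, not taken from the repository.

```go
package main

import (
	"fmt"
	"maps"
)

// detectionStepCondition is the guard expression injected into custom steps.
// The exact string is an assumption based on the review comment above.
const detectionStepCondition = "always() && steps.detection_guard.outputs.run_detection == 'true'"

// withGuard returns a step map carrying the detection guard condition.
// A step with an explicit "if" is returned unchanged; otherwise the map is
// cloned with maps.Copy before the condition is added, so the user's
// original configuration is never mutated.
func withGuard(step map[string]any) map[string]any {
	if _, hasIf := step["if"]; hasIf {
		return step
	}
	injected := make(map[string]any, len(step)+1)
	maps.Copy(injected, step)
	injected["if"] = detectionStepCondition
	return injected
}

func main() {
	step := map[string]any{"name": "Run Security Scanner", "run": "echo scan"}
	guarded := withGuard(step)
	fmt.Println(guarded["if"] == detectionStepCondition) // true
	fmt.Println(step["if"] == nil)                       // true: original untouched
}
```

Cloning matters because the same `[]any` of step maps came straight from the parsed frontmatter; mutating it in place would leak the injected condition back into the compiler's shared config.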