Merged
44 changes: 44 additions & 0 deletions .github/workflows/claude-fix-tests.yml
@@ -0,0 +1,44 @@
name: Claude Fix Failed Tests

on:
workflow_run:
workflows: ["Tests"]
types: [completed]

jobs:
fix-tests:
if: ${{ github.event.workflow_run.conclusion == 'failure' && github.event.workflow_run.head_branch != 'main' }}
runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write
issues: write

steps:
- uses: actions/checkout@v4
with:
ref: ${{ github.event.workflow_run.head_branch }}
fetch-depth: 1
Comment on lines +18 to +21

Check warning — Code scanning / Semgrep OSS

Semgrep finding: yaml.github-actions.security.workflow-run-target-code-checkout (Warning)

This GitHub Actions workflow file uses workflow_run and checks out code from the incoming pull request. When using workflow_run, the Action runs in the context of the target repository, which includes access to all repository secrets. Normally this is safe, because the Action only runs code from the target repository, not the incoming PR. By checking out the incoming PR code, however, the rest of the action now uses that incoming code, and you may be inadvertently executing arbitrary code from the incoming PR with access to repository secrets, which would let an attacker steal them. This typically happens via build scripts (e.g., npm build, make) or dependency installation scripts (e.g., python setup.py install). Audit your workflow file to make sure no code from the incoming PR is executed. Please see https://securitylab.github.com/research/github-actions-preventing-pwn-requests/ for additional mitigations.
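One common mitigation for this class of finding is to avoid leaving a push-capable token behind while the untrusted ref is checked out. The following is a sketch, not code from this PR; `persist-credentials` is a standard `actions/checkout` input:

```yaml
- uses: actions/checkout@v4
  with:
    ref: ${{ github.event.workflow_run.head_branch }}
    fetch-depth: 1
    # Don't leave a push-capable token in .git/config while untrusted code runs.
    persist-credentials: false
```

A job that later needs to push (as this one plausibly does, given `contents: write`) would then have to re-authenticate explicitly at that point, ideally after the untrusted code has been reviewed or sandboxed.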

- name: Download test results
uses: actions/download-artifact@v4
with:
name: test-results
path: TestResults/
run-id: ${{ github.event.workflow_run.id }}
github-token: ${{ secrets.GITHUB_TOKEN }}
continue-on-error: true

- uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
prompt: |
The CI tests failed on branch ${{ github.event.workflow_run.head_branch }}.

1. Check TestResults/ for .trx files with failure details
2. If no artifacts, run: dotnet test src/SimSteward.Plugin.Tests/SimSteward.Plugin.Tests.csproj -c Release -v normal
3. Analyze the root cause of each failure
4. Fix the failing tests or the code they test
5. Verify fixes by running the tests again

Do NOT modify tests just to make them pass — fix the underlying code unless the test itself is wrong.
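Step 1 of the prompt above has Claude read failure details out of `.trx` files. As a rough illustration of what that involves: `.trx` is XML, so failing test names can be surfaced with a quick grep. This is a hypothetical sketch — the file contents and test names below are made-up sample data, not output from this repository's test suite:

```shell
# Create a minimal sample .trx (fabricated for illustration).
mkdir -p TestResults
cat > TestResults/sample.trx <<'EOF'
<TestRun><Results>
  <UnitTestResult testName="Suite.PassingTest" outcome="Passed" />
  <UnitTestResult testName="Suite.FailingTest" outcome="Failed" />
</Results></TestRun>
EOF
# Pull out the testName attribute of each Failed result.
grep -o 'testName="[^"]*" outcome="Failed"' TestResults/*.trx \
  | sed 's/" outcome.*//; s/testName="//'
# → Suite.FailingTest
```

A real `.trx` also carries the error message and stack trace per result, which is what the prompt's "failure details" refers to.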
9 changes: 0 additions & 9 deletions .github/workflows/claude.yml
@@ -39,12 +39,3 @@ jobs:
# This is an optional setting that allows Claude to read CI results on PRs
additional_permissions: |
actions: read

# Optional: Give a custom prompt to Claude. If this is not specified, Claude will perform the instructions specified in the comment that tagged it.
# prompt: 'Update the pull request description to include a summary of changes.'

# Optional: Add claude_args to customize behavior and configuration
# See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
# or https://code.claude.com/docs/en/cli-reference for available options
# claude_args: '--allowed-tools Bash(gh pr:*)'

30 changes: 0 additions & 30 deletions .github/workflows/secrets-scan.yml

This file was deleted.

4 changes: 0 additions & 4 deletions observability/local/.env.observability.example
@@ -9,10 +9,6 @@ GRAFANA_STORAGE_PATH=
# Example (PowerShell): [Convert]::ToBase64String((1..48 | ForEach-Object { Get-Random -Maximum 256 }))
LOKI_PUSH_TOKEN=

# Path to the SimHub plugin data directory on the host. Alloy tails plugin-structured.jsonl from here.
# Example Windows: C:/Users/<your_username>/AppData/Local/SimHubWpf/PluginsData/SimSteward
SIMSTEWARD_DATA_PATH=

# Grafana login (compose substitutes into GF_SECURITY_ADMIN_*). Used only when Grafana has no DB yet.
# If you forgot the password, stop the stack, wipe the Grafana volume (npm run obs:wipe -- -Force -Grafana), then up again.
GRAFANA_ADMIN_USER=admin
52 changes: 9 additions & 43 deletions observability/local/config.alloy
@@ -1,43 +1,9 @@
// Grafana Alloy — tail plugin-structured.jsonl → Loki
// Docs: https://grafana.com/docs/alloy/latest/

local.file_match "simsteward_structured" {
path_targets = [{"__path__" = "/var/log/simsteward/plugin-structured.jsonl"}]
sync_period = "5s"
}

loki.source.file "simsteward_structured" {
targets = local.file_match.simsteward_structured.targets
forward_to = [loki.process.simsteward.receiver]

tail_from_end = true
}

loki.process "simsteward" {
forward_to = [loki.write.local.receiver]

// Extract low-cardinality labels from JSON; everything else stays in the log line.
stage.json {
expressions = {
level = "level",
component = "component",
event = "event",
domain = "domain",
}
}

stage.labels {
values = {
level = "",
component = "",
event = "",
domain = "",
}
}
}

loki.write "local" {
endpoint {
url = "http://loki:3100/loki/api/v1/push"
}
}
// RETIRED — Alloy is no longer part of the observability stack.
//
// The SimHub plugin (PluginLogger.cs) now pushes plugin-structured.jsonl entries directly
// to Loki via LokiPushClient at flush time (~500ms batches), replacing this file-tail pipeline.
//
// Claude Code token metrics are pushed directly from ~/.claude/hooks/loki-log.js at session-end.
//
// This file is kept for reference only. Remove the alloy service from docker-compose.yml
// before starting the stack.
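The retirement note above says the plugin now pushes entries straight to Loki's push API instead of having Alloy tail the file. As a sketch of what such a push looks like on the wire (the labels and localhost endpoint are assumptions based on this stack, not the actual `LokiPushClient` implementation): Loki's `/loki/api/v1/push` endpoint takes streams of labels plus `[timestamp_ns, line]` pairs.

```shell
# Build a single-entry Loki push payload: one label set, one log line.
ts_ns="$(date +%s)000000000"               # Loki wants nanosecond timestamps
payload=$(printf '{"streams":[{"stream":{"component":"PluginLogger","level":"info"},"values":[["%s","plugin started"]]}]}' "$ts_ns")
echo "$payload"
# Against the compose stack in this repo, the push itself would be:
# curl -s -H 'Content-Type: application/json' -X POST \
#   --data-raw "$payload" http://localhost:3100/loki/api/v1/push
```

Batching these entries every ~500ms, as the note describes, amortizes the HTTP overhead per log line.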
45 changes: 0 additions & 45 deletions observability/local/docker-compose.yml
@@ -32,45 +32,11 @@ services:
timeout: 5s
retries: 10

otel-collector:
image: otel/opentelemetry-collector-contrib:0.115.1
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro
ports:
- "4317:4317"
- "4318:4318"
# Host 18889 avoids conflict with other tools binding Windows :8889; Prometheus still scrapes otel-collector:8889 on the Docker network.
- "18889:8889"
- "13133:13133"

prometheus:
image: prom/prometheus:v2.55.1
depends_on:
- otel-collector
command:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention.time=15d"
- "--web.enable-lifecycle"
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
- ${GRAFANA_STORAGE_PATH:-S:/sim-steward-grafana-storage}/prometheus:/prometheus
healthcheck:
test: ["CMD", "wget", "-q", "-O", "-", "http://127.0.0.1:9090/-/healthy"]
interval: 10s
timeout: 5s
retries: 10

grafana:
image: grafana/grafana:11.2.0
depends_on:
loki:
condition: service_healthy
prometheus:
condition: service_healthy
ports:
- "3000:3000"
environment:
@@ -81,17 +47,6 @@
- ${GRAFANA_STORAGE_PATH:-S:/sim-steward-grafana-storage}/grafana:/var/lib/grafana
- ./grafana/provisioning:/etc/grafana/provisioning:ro

alloy:
image: grafana/alloy:v1.5.1
depends_on:
loki:
condition: service_healthy
volumes:
- ./config.alloy:/etc/alloy/config.alloy:ro
- ${SIMSTEWARD_DATA_PATH}:/var/log/simsteward:ro
- ${GRAFANA_STORAGE_PATH:-S:/sim-steward-grafana-storage}/alloy:/tmp/positions
command: ["run", "/etc/alloy/config.alloy", "--storage.path=/tmp/positions"]

data-api:
build: ./data-api
ports: