feat: bundle token usage as agent artifact #24315
Conversation
- Add ParseTokenUsageStepID constant for the token usage step
- Add `id: token_usage` to the parse_token_usage step so outputs are addressable
- Update parse_token_usage.sh to write token totals to GITHUB_OUTPUT
- Add input_tokens, output_tokens, cache_read_tokens, cache_write_tokens to agent job outputs
- Update golden files and recompile lock files

Agent-Logs-Url: https://github.com/github/gh-aw/sessions/3376d0c0-3332-4508-899b-9f4fdd7d82c1
Co-authored-by: pelikhan <4175913+pelikhan@users.noreply.github.com>
@copilot also write a usage.json file in the artifact folder so that it gets packaged
- Add TokenUsageFilename constant ("usage.json")
- parse_token_usage.sh now writes /tmp/gh-aw/usage.json with JSON token totals
- Include /tmp/gh-aw/usage.json in the agent artifact upload when firewall is enabled
- Update golden files and recompile lock files
Agent-Logs-Url: https://github.com/github/gh-aw/sessions/8e8cf92a-1b36-4dfd-b5dd-ba2fd1798bb4
Co-authored-by: pelikhan <4175913+pelikhan@users.noreply.github.com>
Done in a2bc908.

Warning: firewall rules blocked connections to one or more addresses during this session.
Pull request overview
This PR makes aggregated token usage from the firewall proxy available to downstream GitHub Actions jobs by exposing token totals as agent job outputs and by bundling a structured usage.json file into the agent artifact.
Changes:
- Adds an `id` to the "Parse token usage for step summary" step so step outputs are addressable (`steps.token_usage.outputs.*`).
- Wires aggregated token totals into the agent job's `outputs` when the firewall is enabled.
- Includes `/tmp/gh-aw/usage.json` in the unified agent artifact upload and updates compiled workflow fixtures/lockfiles accordingly.
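Taken together, the compiled wiring looks roughly like this (a minimal YAML sketch, not the actual generated lock file; the step name and `continue-on-error` flag come from this review, while the script invocation and upload step details are illustrative assumptions):

```yaml
jobs:
  agent:
    outputs:
      input_tokens: ${{ steps.token_usage.outputs.input_tokens }}
      output_tokens: ${{ steps.token_usage.outputs.output_tokens }}
      cache_read_tokens: ${{ steps.token_usage.outputs.cache_read_tokens }}
      cache_write_tokens: ${{ steps.token_usage.outputs.cache_write_tokens }}
    steps:
      - name: Parse token usage for step summary
        id: token_usage
        continue-on-error: true
        run: bash parse_token_usage.sh        # illustrative invocation
      - name: Upload agent artifact           # illustrative upload step
        uses: actions/upload-artifact@v4
        with:
          path: |
            /tmp/gh-aw/usage.json
```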
Reviewed changes
Copilot reviewed 190 out of 190 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `pkg/workflow/compiler_yaml_ai_execution.go` | Assigns a stable step id for the token usage parsing step so outputs can be referenced. |
| `pkg/workflow/compiler_main_job.go` | Exposes token totals as agent job outputs (firewall-enabled workflows). |
| `pkg/workflow/compiler_yaml_main_job.go` | Adds aggregated usage.json to the unified agent artifact paths (firewall-enabled workflows). |
| `pkg/constants/job_constants.go` | Introduces constants for the token usage step ID and aggregated JSON filename. |
| `pkg/workflow/testdata/TestWasmGolden_CompileFixtures/basic-copilot.golden` | Updates golden output to include the new step id, outputs, and artifact path. |
| `pkg/workflow/testdata/TestWasmGolden_CompileFixtures/with-imports.golden` | Updates golden output to include the new step id, outputs, and artifact path. |
| `.github/workflows/*.lock.yml` | Regenerates locked workflow YAML to include the new step id, job outputs, and usage.json artifact path. |
pkg/workflow/compiler_main_job.go (outdated)

```go
if isFirewallEnabled(data) {
	outputs["input_tokens"] = fmt.Sprintf("${{ steps.%s.outputs.input_tokens }}", constants.ParseTokenUsageStepID)
	outputs["output_tokens"] = fmt.Sprintf("${{ steps.%s.outputs.output_tokens }}", constants.ParseTokenUsageStepID)
	outputs["cache_read_tokens"] = fmt.Sprintf("${{ steps.%s.outputs.cache_read_tokens }}", constants.ParseTokenUsageStepID)
	outputs["cache_write_tokens"] = fmt.Sprintf("${{ steps.%s.outputs.cache_write_tokens }}", constants.ParseTokenUsageStepID)
}
```
Because the token-usage parsing step is marked continue-on-error: true and the underlying script can exit successfully without writing any $GITHUB_OUTPUT values (e.g., when token-usage.jsonl is missing/empty or AWF is older), these job outputs can end up as empty strings. That can break downstream jobs expecting numeric values. Consider adding expression defaults (e.g., || '0') for each of these outputs so downstream consumers always get a usable number.
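The reviewer's suggestion could be applied roughly as follows (a sketch only: the `ParseTokenUsageStepID` constant and output names come from this PR, but the `tokenOutputExpr` helper and the `|| '0'` default are the proposed change, not merged code):

```go
package main

import "fmt"

// ParseTokenUsageStepID mirrors the constant introduced in
// pkg/constants/job_constants.go (reproduced here so the sketch is self-contained).
const ParseTokenUsageStepID = "token_usage"

// tokenOutputExpr builds a step-output expression with a '0' fallback, so
// downstream jobs always receive a numeric value even when the
// continue-on-error parsing step wrote nothing to $GITHUB_OUTPUT.
func tokenOutputExpr(name string) string {
	return fmt.Sprintf("${{ steps.%s.outputs.%s || '0' }}", ParseTokenUsageStepID, name)
}

func main() {
	outputs := map[string]string{}
	for _, name := range []string{"input_tokens", "output_tokens", "cache_read_tokens", "cache_write_tokens"} {
		outputs[name] = tokenOutputExpr(name)
	}
	fmt.Println(outputs["input_tokens"])
	// → ${{ steps.token_usage.outputs.input_tokens || '0' }}
}
```

In GitHub Actions expressions, `||` returns the right operand when the left is empty, so an unset step output degrades to the string `'0'` instead of an empty string.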
@copilot rename usage.json to agent_usage.json since detection will have one too
Agent-Logs-Url: https://github.com/github/gh-aw/sessions/81ee64df-cd3b-4872-ab91-ade0ed6ef386
Co-authored-by: pelikhan <4175913+pelikhan@users.noreply.github.com>
Done in bfc211d. Renamed to agent_usage.json.
…ifact

Agent-Logs-Url: https://github.com/github/gh-aw/sessions/93cefac8-0ca2-4e90-8b18-2ce8e5c4468e
Co-authored-by: pelikhan <4175913+pelikhan@users.noreply.github.com>
Done in efce56e. Removed the four token count entries from the agent job outputs.
The token usage step wrote to `$GITHUB_STEP_SUMMARY` only; no structured token data was available to third-party tools at runtime.

Changes

- `parse_token_usage.sh` — After building the markdown table, runs an additional awk pass to produce `/tmp/gh-aw/agent_usage.json` with aggregated token totals as structured JSON
- `compiler_yaml_main_job.go` — Includes `/tmp/gh-aw/agent_usage.json` in the unified artifact upload when the firewall is enabled
- `pkg/constants/job_constants.go` — Adds the `TokenUsageFilename = "agent_usage.json"` constant; the `agent_` prefix distinguishes it from a future detection job artifact of the same shape

Result

An `agent_usage.json` file is written to `/tmp/gh-aw/agent_usage.json` and bundled in the agent artifact, providing structured token data for third-party tools without requiring step summary parsing:

```json
{"input_tokens":1234,"output_tokens":567,"cache_read_tokens":890,"cache_write_tokens":0}
```