Daily Copilot Token Consumption Report – 2026-03-24 #22661
This discussion was automatically closed because it expired on 2026-03-27T11:35:23.744Z.
Executive Summary
Over the last 30 days, 89 workflow runs across 44 unique Copilot-powered workflows consumed 85,210,999 tokens (~85.2M). The data covers runs from 2026-02-20 through 2026-03-24, a 48.8% reduction in tokens compared with the previous reporting period (2026-02-20: ~166.5M tokens, 195 runs). This significant drop reflects a lower run count (down 54.4%), consistent with the weekend/holiday schedule pattern.
Key Highlights
- Daily Syntax Error Quality Check – 11.1M tokens across 2 runs (13.0% of total)
- Issue Monster – 12 runs, 5.7M tokens total

Token Usage & Run Trends
Token consumption has varied significantly across reporting periods, from a high of 237.8M (2026-02-11, 378 runs) to today's 85.2M (89 runs). The current day reflects a lighter workload consistent with a weekday morning schedule snapshot.
Average Tokens per Workflow Run
The average tokens-per-run metric highlights how efficiently each period used the model. The 2026-02-11 peak had ~629K avg tokens/run, while today shows ~957K, meaning fewer but heavier runs.
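The per-run averages and the period-over-period delta can be reproduced with a couple of one-liners. A minimal sketch; the function names are illustrative, and the figures are the ones quoted in this report:

```python
def avg_tokens_per_run(total_tokens: int, runs: int) -> float:
    """Average tokens consumed per workflow run in a reporting period."""
    return total_tokens / runs

def pct_change(previous: float, current: float) -> float:
    """Percentage change from the previous period to the current one."""
    return (current - previous) / previous * 100

# Figures quoted in this report:
print(round(avg_tokens_per_run(85_210_999, 89)))       # -> 957427 (~957K, today)
print(round(avg_tokens_per_run(237_800_000, 378)))     # -> 629101 (~629K, 2026-02-11)
print(round(pct_change(166_500_000, 85_210_999), 1))   # -> -48.8 (token delta vs 2026-02-20)
```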
Top 10 Workflows by Token Consumption
Insights & Recommendations
High-Token Workflows
- Daily Syntax Error Quality Check – 11.1M tokens (13.0% of all consumption)
- Functional Pragmatist – 5.7M tokens (6.7%) in a single run (78 turns, 17.2 min)
- Smoke Multi PR – 4.1M tokens in a single test run (89 turns, 10.5 min)
Optimization Opportunities
- Scope-limited quality checks: Workflows like Daily Syntax Error Quality Check and Daily Compiler Quality Check could target only changed files (via `git diff --name-only origin/main`) instead of full-repo scans.
- Parallel batching vs. sequential turns: Issue Monster (12 runs, avg 11.7 turns) and Contribution Check (6 runs, avg 29.7 turns) use many turns. Restructuring prompts to batch-process items in fewer turns could reduce overhead.
- Smoke test right-sizing: Smoke tests (Smoke Multi PR, Agent Container Smoke Test) collectively consumed ~5.7M tokens. Consider tiered smoke tests: quick sanity checks vs. full integration tests.
- Cache-aware workflows: Workflows like Glossary Maintainer and jsweep may re-read large context each run. Using cache-memory to persist intermediate state could reduce re-reading.

Per-Workflow Detailed Statistics (All 44 Workflows)
Historical Comparison
vs 2026-02-20 (previous report): 48.8% fewer tokens, 54.4% fewer runs
The per-run token average has been trending upward, from 426K (Jan 22) to 957K today, suggesting workflows are becoming more complex or tackling larger tasks per run, even as total run counts fluctuate with schedule patterns.
Methodology & Data Quality Notes
Methodology
- `memory/token-metrics/history.jsonl` in repo memory branch

Data Quality Notes
- Daily Copilot Token Consumption Report (this run) shows 0 tokens because it was still in progress at log collection time
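A minimal sketch of consuming the `history.jsonl` file named in the methodology, tolerating the kind of in-progress records flagged above. The field names (`workflow`, `tokens`) are assumptions for illustration, not the file's documented schema:

```python
import json

def load_history(lines):
    """Parse JSONL records, skipping blank or malformed lines
    (e.g. a record truncated mid-write at collection time)."""
    records = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue
    return records

def total_tokens(records):
    """Sum token counts; missing fields count as 0 (in-progress runs)."""
    return sum(r.get("tokens", 0) for r in records)

# Usage (normally: open("memory/token-metrics/history.jsonl")):
sample = [
    '{"workflow": "Issue Monster", "tokens": 475000}',
    '{"workflow": "Daily Copilot Token Consumption Report"}',  # in-progress, no tokens yet
    '{"workflow": "jsweep", "tok',  # truncated write, skipped
]
print(total_tokens(load_history(sample)))  # -> 475000
```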