Daily Copilot Token Consumption Report — 2026-03-31 #23678
This discussion has been marked as outdated by the Daily Copilot Token Consumption Report workflow. A newer discussion is available at Discussion #23864.
Executive Summary
On 2026-03-31, Copilot-powered agentic workflows consumed 80,923,350 tokens across 111 workflow runs covering 66 unique workflows, totalling 1,627 agent turns and 890 action minutes (~14.8 hours of compute). This represents a 51.4% decrease in token consumption and 43.1% fewer runs compared to the previous snapshot (2026-02-20).
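As a quick sanity check on the summary figures, the previous-snapshot totals implied by the reported deltas can be back-computed (these implied values are inferred from the percentages above, not read from the raw audit data):

```python
# Back-of-envelope check of the executive summary figures. The previous
# snapshot's totals are inferred from the reported deltas, not from raw data.
tokens_now, runs_now, minutes_now = 80_923_350, 111, 890

hours = minutes_now / 60                 # ~14.8 h of compute
prev_tokens = tokens_now / (1 - 0.514)   # implied 2026-02-20 token total
prev_runs = runs_now / (1 - 0.431)       # implied 2026-02-20 run count

print(f"{hours:.1f} h, prev ≈ {prev_tokens / 1e6:.1f}M tokens, {prev_runs:.0f} runs")
```

This puts the 2026-02-20 snapshot at roughly 166.5M tokens over ~195 runs.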
Key Highlights
🏆 Top Workflows by Token Consumption
Top 10 Most Expensive Workflows
The top 10 workflows account for ~44.1% of total token consumption (35,714,360 tokens).
📊 Token Efficiency Analysis
The scatter plot above maps total tokens vs. total turns for all workflows (bubble size = number of runs). Workflows in the upper-right are high-cost and turn-heavy; those in the lower-left are lean and efficient.
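The scatter described above can be sketched as follows; the workflow tuples are sample values pulled from the figures quoted later in this report, and the field layout is illustrative rather than the report's actual schema:

```python
# Hypothetical reconstruction of the tokens-vs-turns bubble chart.
# Each point is one workflow; bubble size scales with its run count.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

workflows = [  # (name, total_tokens, total_turns, runs) — sample values
    ("Copilot CLI Deep Research Agent", 5_200_000, 68, 2),
    ("Daily Syntax Error Quality Check", 4_500_000, 90, 1),
    ("Code Simplifier", 4_200_000, 66, 1),
    ("Contribution Check", 4_800_000, 149, 6),  # 24.8 avg turns x 6 runs
]

fig, ax = plt.subplots()
for name, tokens, turns, runs in workflows:
    ax.scatter(turns, tokens, s=runs * 60, alpha=0.5)
    ax.annotate(name, (turns, tokens), fontsize=7)
ax.set_xlabel("total turns")
ax.set_ylabel("total tokens")
fig.savefig("token_efficiency.png")
```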
Outliers to watch:
🌐 Token Distribution by Task Domain
General Automation dominates at 32.3% of all tokens — a broad category worth decomposing to identify optimization targets.
📈 Historical Trends
Historical Snapshot Comparison
Observations:
💡 Insights & Recommendations
High-Turn Workflows Driving Costs
- **Daily Syntax Error Quality Check** — 90 turns, 4.5M tokens; a lighter model (e.g. `claude-haiku-4-5`) for syntax-only checks
- **Copilot CLI Deep Research Agent** — 68 turns, 5.2M tokens; consider capping `max_turns`
- **Code Simplifier** — 66 turns, 4.2M tokens (single run); `agentic_fraction` improvements to move data-gathering to pre-steps
- **Contribution Check** — 24.8 avg turns over 6 runs (4.8M total)
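For the multi-run outlier above, the per-run and per-turn averages follow directly from the reported totals (the ~797K tokens/run cited in the action items below corresponds to the unrounded raw total):

```python
# Per-run averages for Contribution Check, using the rounded figures above.
total_tokens, runs, avg_turns = 4_800_000, 6, 24.8

tokens_per_run = total_tokens / runs            # ≈ 800K with the rounded total
tokens_per_turn = tokens_per_run / avg_turns    # cost of each agent turn

print(f"{tokens_per_run:,.0f} tokens/run, {tokens_per_turn:,.0f} tokens/turn")
```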
Model Downgrade Opportunities
Several workflows were flagged with `model_downgrade_available` assessments in the raw data. Consider lighter models for:

- `gpt-4.1-mini` or `claude-haiku-4-5`
- `gpt-4.1-mini`

Estimated potential savings: 15–30% of total tokens if model downgrades are applied to eligible workflows.
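Applied to this snapshot's total, the 15–30% estimate can be sized in absolute terms:

```python
# Rough sizing of the 15-30% downgrade savings against the day's total.
total = 80_923_350  # tokens consumed on 2026-03-31

low, high = int(total * 0.15), int(total * 0.30)
print(f"potential savings: {low / 1e6:.1f}M – {high / 1e6:.1f}M tokens/day")
```

That is roughly 12.1M to 24.3M tokens per day if every eligible workflow is downgraded.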
Workflows with No Token Data (14 workflows)
The following workflows ran but reported zero tokens (likely non-Copilot steps or completed before token logging):
Recommendation: Investigate **Smoke Copilot** (56 action minutes, 0 tokens) — this may indicate a non-Copilot smoke test runner or a token reporting gap.

Per-Workflow Detailed Statistics (All 66 Workflows)
Methodology & Data Notes
Methodology
- `gh-aw audit` covering the last 30 days
- `token_usage` per run (not available for all runs)

Data Quality Notes

- One run was `in_progress` at time of data collection and is excluded from token counts

🎯 Action Items
- **Daily Syntax Error Quality Check** — 90 turns is a red flag; move file enumeration to deterministic pre-steps
- **Code Simplifier** — ~4.2M tokens per run; run per-package rather than whole-repo; cap at 30 turns
- **Contribution Check** — 6 runs × ~797K tokens is the largest multi-run cost; cache results or reduce frequency
- **Smoke Copilot** — 7 runs, 56 action minutes, 0 tokens logged is anomalous

References: