[prompt-analysis] Copilot PR Prompt Analysis - 2025-12-12 #6231
🤖 Copilot PR Prompt Pattern Analysis - 2025-12-12
Analysis Period: Last 30 days
Total PRs: 1,000 | Merged: 786 (79.0%) | Closed: 209 (21.0%) | Open: 5 (percentages are of the 995 merged or closed PRs)
This analysis examines 1,000 Copilot-generated PRs to identify which prompt patterns correlate with successful merges versus closed PRs.
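The headline merge rate can be reproduced from per-PR state counts. The following is a minimal sketch of that tally, assuming PR records as simple dicts; the real workflow's data source and record shape are not shown in the report.

```python
# Sketch: compute the merge rate from a list of PR records.
# The record shape here is an assumption for illustration only.
from collections import Counter

prs = (
    [{"state": "merged"}] * 786
    + [{"state": "closed"}] * 209
    + [{"state": "open"}] * 5
)

counts = Counter(pr["state"] for pr in prs)
resolved = counts["merged"] + counts["closed"]  # open PRs are excluded
merge_rate = counts["merged"] / resolved * 100

print(f"Total: {len(prs)} | Merged: {counts['merged']} ({merge_rate:.1f}%)")
```

Note that 786 / 995 ≈ 79.0%, which is why the merged and closed percentages sum to 100% even though 5 PRs remain open.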
Key Findings
The analysis reveals three critical success factors, detailed in the Key Insights section below.
Prompt Categories and Success Rates
Prompt Analysis
✅ Successful Prompt Patterns
Common characteristics in merged PRs:
Most common keywords in merged PRs: copilot, agent, github, coding, workflow, details, https, summary, com, start
Example successful prompts:
PR #5906: Fix `TestGetActionPinsSorting` expected count after adding action pin
"`TestGetActionPinsSorting` expected 26 action pins but `action_pins.json` contains 27 entries. ## C..."
PR #5611: Update dev.md to demonstrate custom_agent and custom_instructions with expression rendering
"`dev.md` workflow to showcase the new `assign_to_agent` API options added in the PR: `custom_agent` and `cu..."
PR #5549: [WIP] Increase timeout duration to 45 minutes
❌ Unsuccessful Prompt Patterns
Common characteristics in closed PRs:
Most common keywords in closed PRs: copilot, agent, coding, github, details, workflow, https, summary, start, mcp
Example unsuccessful prompts:
PR #4600: [WIP] Add automatic init command when using add
PR #4528: [WIP] Update mcp.json to use mcpServers instead of servers
PR #5472: Add cache-memory artifact sanitization to threat detection jobs
Key Insights
Based on analyzing 1,000 PRs over 30 days, three patterns emerge:
Pattern 1: Prompts that reference specific files (`.go`, `.md`, `.yml`, etc.) appear in merged PRs 8.3 percentage points more often (91.1% vs 82.8%)
Pattern 2: Including error messages or failure descriptions correlates with success: present in 48% of merged PRs vs only 38% of closed PRs
Pattern 3: The most successful prompt categories are Update (81.0%), Testing (80.7%), and Feature (80.2%), while generic "other" prompts have only 25% success rate
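Pattern 1 depends on classifying each prompt as "references a specific file" or not. The exact heuristic used by the workflow is not shown in the report; the sketch below is one plausible implementation, using a hypothetical regex over common extensions.

```python
import re

# Hypothetical detector for "prompt references a specific file";
# the extension list and pattern are illustrative assumptions.
FILE_REF = re.compile(r"[\w./-]+\.(?:go|md|yml|yaml|json|py|ts)\b")

def references_file(prompt: str) -> bool:
    """True if the prompt mentions a concrete filename or path."""
    return bool(FILE_REF.search(prompt))

def presence_rate(prompts: list[str]) -> float:
    """Percentage of prompts that reference at least one file."""
    hits = sum(references_file(p) for p in prompts)
    return hits / len(prompts) * 100
```

Running `presence_rate` separately over merged and closed prompt sets and differencing the results would yield the 8.3-point gap reported above.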
Recommendations
Based on today's analysis, here are actionable recommendations for writing successful Copilot prompts:
✅ DO:
Be specific with file references: Include actual filenames, paths, or extensions (e.g., "Update `pkg/cli/compile.go`" rather than "Update the compiler")
Provide error context: When fixing bugs, include error messages, stack traces, or failure descriptions to give the agent concrete context
Use clear action verbs: Start with specific verbs like "Fix", "Add", "Update", "Implement" followed by the target
Aim for 400-500 words: This appears to be the sweet spot, detailed enough to be specific but not overly verbose
Reference issues when relevant: ~75% of all PRs reference issues, showing this is a common and effective practice
❌ AVOID:
Generic requests: Vague prompts like "improve the code" or "make it better" correlate with lower success rates
Too brief: Prompts under 200 words may lack the necessary context for the agent to succeed
Missing technical context: Avoid omitting file names, function names, or error messages when they're relevant
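The DO/AVOID checklist above could be applied mechanically before submitting a prompt. Here is a minimal, illustrative linter encoding two of the report's recommendations (word count and a leading action verb); the function name and thresholds are assumptions, not part of the workflow.

```python
def lint_prompt(prompt: str) -> list[str]:
    """Flag prompts that miss the report's recommendations (illustrative only)."""
    warnings = []
    words = len(prompt.split())
    if words < 200:
        warnings.append("under 200 words: may lack context")
    elif words > 500:
        warnings.append("over 500 words: may be overly verbose")
    # Action verbs suggested in the recommendations above.
    if not any(prompt.startswith(v) for v in ("Fix", "Add", "Update", "Implement")):
        warnings.append("does not start with a clear action verb")
    return warnings
```

For example, `lint_prompt("Fix the bug")` flags only the word count, while a 250-word prompt beginning "Refactor" is flagged only for its opening verb.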
Historical Trends
Trend Analysis: Over the past 3 days, the success rate has remained stable around 79%, with Update and Testing categories consistently showing the highest success rates above 80%.
Generated by Copilot PR Prompt Analysis Workflow (Run #20161857609)