---
marp: true
theme: default
paginate: true
backgroundColor:
style: |
  /* Cronos Public Services Theme */
  @import 'default';
  :root { --cronos-blue: #0084C7; --cronos-dark-blue: #003D5B; --cronos-light-blue: #E6F2F8; --cronos-gray: #F5F5F5; --cronos-text: #333333; }
  section { font-family: 'Arial', 'Helvetica', sans-serif; font-size: 32px; padding: 30px 50px; background-color: white; }
  /* Title Slide */
  section.title { background-image: linear-gradient(to right, rgba(255,255,255,0.9), rgba(255,255,255,0.7)), url('data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1920 1080"><rect fill="%23E6F2F8" width="1920" height="1080"/><path fill="%230084C7" opacity="0.1" d="M1920,0 L1920,1080 L960,1080 C960,540 1440,540 1440,0 Z"/></svg>'); background-size: cover; background-position: center; justify-content: center; align-items: center; text-align: center; }
  section.title h1 { color: var(--cronos-blue); font-size: 56px; font-weight: bold; margin-bottom: 20px; }
  section.title .subtitle { color: var(--cronos-text); font-size: 32px; margin-bottom: 40px; }
  /* Headers */
  h1 { color: var(--cronos-blue); font-size: 42px; margin-top: 0; margin-bottom: 20px; padding-bottom: 10px; border-bottom: 2px solid var(--cronos-blue); }
  h2 { color: var(--cronos-dark-blue); font-size: 36px; margin-top: 0; margin-bottom: 15px; }
  h3 { color: var(--cronos-blue); font-size: 28px; margin-top: 10px; margin-bottom: 10px; }
  /* Text */
  p { color: var(--cronos-text); font-size: 24px; line-height: 1.5; margin-top: 0; margin-bottom: 12px; }
  ul, ol { color: var(--cronos-text); font-size: 24px; line-height: 1.6; margin-left: 30px; margin-top: 10px; margin-bottom: 15px; }
  li { margin-bottom: 8px; }
  /* Code Blocks */
  pre { background-color: var(--cronos-gray); border-left: 4px solid var(--cronos-blue); padding: 15px; border-radius: 4px; overflow-x: auto; font-size: 20px; margin: 15px 0; }
  code { background-color: var(--cronos-light-blue); padding: 3px 8px; border-radius: 3px; font-family: 'Consolas', 'Monaco', monospace; font-size: 20px; }
  /* Tables */
  table { border-collapse: collapse; width: 100%; margin: 20px 0; font-size: 22px; }
  th { background-color: var(--cronos-blue); color: white; padding: 15px; text-align: left; font-weight: bold; font-size: 22px; }
  td { padding: 15px; border-bottom: 1px solid var(--cronos-light-blue); font-size: 22px; }
  tr:nth-child(even) { background-color: var(--cronos-gray); }
  /* Blockquotes */
  blockquote { border-left: 4px solid var(--cronos-blue); padding-left: 20px; margin: 20px 0; font-style: italic; color: var(--cronos-dark-blue); }
  /* Emphasis */
  strong { color: var(--cronos-dark-blue); font-weight: bold; }
  em { color: var(--cronos-blue); font-style: italic; }
  /* Links */
  a { color: var(--cronos-blue); text-decoration: none; border-bottom: 1px solid transparent; transition: border-bottom 0.3s; }
  a:hover { border-bottom: 1px solid var(--cronos-blue); }
  /* Slide Numbers */
  section::after { content: attr(data-marpit-pagination); position: absolute; bottom: 20px; right: 60px; color: var(--cronos-text); font-size: 12px; }
  /* Hide pagination on title slide */
  section.title::after { display: none; }
  /* Two Column Layout */
  .columns { display: grid; grid-template-columns: 1fr 1fr; gap: 40px; align-items: start; }
---
Morning Sessions:
- 09:00 - Welcome & Kickoff
- 09:20 - Chapter 1: LLM Fundamentals
- 09:40 - Chapter 2 & 3: GitHub Copilot & IDE
- 10:15 - Exercise 1: Refactor Rescue
- 10:40 - Break
- 10:55 - Chapter 4 & 5: Prompt Engineering & Advanced Workflows
- 12:00 - Lunch
Afternoon Sessions:
- 13:00 - Chapter 6: GitHub Platform Integration
- 14:00 - Chapter 7: AI Security Best Practices
- 14:30 - Break
- 14:45 - Exercise 2: Context Matters
- 16:15 - Chapter 8: Making Impact
- 16:45 - Wrap-up & Q&A
- 17:00 - End
- Who are you?
- What are your expectations for today?
- Your specific goal(s)?
Balanced Format:
- Theory & Explanations - Understand the why
- Live demos - See it in action
- Hands-on exercises - Practice yourself
- Team discussions - Learn from each other
Ground Rules:
- Questions welcome anytime
- Share experiences
- Focus on practical application
- We'll explain concepts before diving in
Core Concept:
- Trained on vast code & text datasets
- Pattern recognition at scale
- Context-aware predictions
- Not just "autocomplete"
Copilot's Foundation:
- OpenAI Codex base
- Fine-tuned for programming
- Multi-language support
- Continuous improvement
Training Process:
1. Pre-training (yearly)
   ├─ Download ~10TB of text
   ├─ 6,000-GPU cluster
   ├─ Compress into neural network
   ├─ Cost: ~$2M, Time: ~12 days
   └─ Result: Base model
2. Fine-tuning (weekly)
   ├─ 100K+ Q&A examples
   ├─ Human feedback loop
   ├─ Train for ~1 day
   └─ Result: Assistant model
Key Insight: Billions of parameters adjusted to predict the next word
How It Works:
- Input: "The cat sat on a..."
- Neural network processes
- Output: "mat" (97% probability)
- Billions of parameters working together
The Mystery:
- We can measure it works
- We can't explain WHY it works
- Parameters are incomprehensible to humans
Input
  ↓
[Neural Network]
100B parameters
  ↓
Most Probable
Output
Remember:
- AI gives the most probable answer
- Not necessarily the most correct
- Based on patterns in training data
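The "most probable, not most correct" idea can be sketched in a few lines of C#. This is a toy illustration only (the names `NextToken`, `Softmax`, and the example scores are invented for this sketch, not part of any real model): the model assigns a score to every candidate token, softmax turns scores into probabilities, and the suggestion is simply the highest-probability token.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy sketch: an LLM scores candidate tokens and emits the most probable one.
public static class NextToken
{
    // Softmax turns raw scores (logits) into probabilities that sum to 1.
    public static Dictionary<string, double> Softmax(Dictionary<string, double> logits)
    {
        double max = logits.Values.Max();                        // subtract max for numeric stability
        double sum = logits.Values.Sum(v => Math.Exp(v - max));
        return logits.ToDictionary(kv => kv.Key, kv => Math.Exp(kv.Value - max) / sum);
    }

    // The model's "answer" is simply the highest-probability token --
    // which is not the same thing as the most correct token.
    public static string MostProbable(Dictionary<string, double> logits) =>
        Softmax(logits).OrderByDescending(kv => kv.Value).First().Key;
}
```

With scores like `{"mat": 4.0, "sofa": 1.5, "moon": 0.1}`, `MostProbable` returns "mat" -- high confidence, but the confidence comes from training-data frequency, not correctness.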
Critical Understanding:
- LLMs always output what's statistically likely
- Trained on code that exists, including bugs
- Can confidently suggest incorrect solutions
- Will hallucinate plausible-sounding APIs
Example Hallucinations:
// AI might suggest this API that doesn't exist:
var result = String.ParseJson(jsonString);
// Or mix syntax from different languages:
public async Task<> getData() => { // Mixed C#/JS
    return await fetch(url);
}

Bottom Line: Always verify AI suggestions - they're optimized for probability, not correctness
Key Differences:
| Traditional Code | LLM/Copilot |
|---|---|
| Same input → Same output | Same input → Varied outputs |
| Rule-based logic | Pattern-based suggestions |
| 100% predictable | Probabilistic responses |
| Binary correct/incorrect | Confidence scores |
Implication: Verify suggestions, don't assume correctness
Input: "function calculateTotal"
Tokens: ["function", "calculate", "Total"]
◀── ~4000 token window limit ──▶
Context Window:
- Current file content
- Open tabs (relevant)
- Recent edits
- Comments & docstrings
Best Practice: Keep relevant context visible
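Why does "keep relevant context visible" matter? Here is a rough C# sketch of the mechanics (all names are illustrative; real tokenizers split text into subword units such as BPE, not whitespace words, so the count here is only an approximation): content has a token cost, and once the budget is full, older context simply falls out of the window.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Rough sketch of a token budget. Whitespace splitting approximates
// real subword tokenization for illustration purposes only.
public static class ContextWindow
{
    public const int Limit = 4000; // token budget from the slide

    public static int CountTokens(string text) =>
        text.Split((char[])null, StringSplitOptions.RemoveEmptyEntries).Length;

    // Keep the most recent snippets that still fit in the budget,
    // mirroring how older context falls out of the window first.
    public static List<string> FitMostRecent(IEnumerable<string> snippets, int budget)
    {
        var kept = new List<string>();
        int used = 0;
        foreach (var s in snippets.Reverse())   // newest first
        {
            int cost = CountTokens(s);
            if (used + cost > budget) break;
            kept.Insert(0, s);                  // restore original order
            used += cost;
        }
        return kept;
    }
}
```

The practical consequence: whatever you want Copilot to see (the relevant file, the right open tabs) must fit inside that budget, so irrelevant open files crowd out useful context.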
Common Hallucinations:
- Inventing APIs that don't exist
- Mixing syntax between languages
- Creating plausible but wrong logic
- Referencing non-existent files
Your Guardrails:
- ✅ Review before accepting
- ✅ Run tests immediately
- ✅ Check API documentation
- ✅ Trust but verify
| Generic AI | GitHub Copilot |
|---|---|
| General knowledge | Code-focused training |
| No IDE integration | Deep IDE integration |
| Copy-paste workflow | In-line suggestions |
| Limited code context | Full repository awareness |
| Generic responses | Language-specific patterns |
Bottom Line: Built by developers, for developers
Learning Revolution:
- 💡 Instant explanations in context
- 🔍 Discover patterns across languages
- 🌍 Explore new tech safely
- ⏱️ Get unstuck in seconds, not hours
- 🎯 Focus on concepts, not syntax
Example Prompts:
- "Explain this LINQ query and show SQL equivalent"
- "How would this pattern work in Python?"
- "Explain this like I'm a junior developer"
- "Compare async/await in C# vs JavaScript"
Bottom Line: Accelerate from months to weeks in mastering new skills
AI Handles the Tedious:
- ✅ Boilerplate generation
- ✅ Test scaffolding
- ✅ Documentation
- ✅ Data models
- ✅ Error parsing
- ✅ Repetitive refactoring
You Focus On:
- 🧠 Business logic
- 💡 Creative solutions
- 🎯 User experience
- 📈 Performance optimization
- 🏗️ Architecture decisions
- 🤝 Stakeholder value
Result: More time for meaningful, impactful work
Generic LLMs:
- Broad knowledge base
- General purpose text generation
- No code specialization
- Limited IDE integration
GitHub Copilot:
- Code-focused training
- Developer-specific fine-tuning
- Deep IDE integration
- Repository context awareness
The Evolution: LLM → Codex → GitHub Copilot → Your AI Pair Programmer
Unique Benefits:
- Trained on public GitHub repositories
- Understands project patterns
- PR and issue integration
- Security scanning built-in
- Team collaboration features
Not Just Code Completion:
- Documentation generation
- Test creation
- Code review assistance
- Security remediation
What Copilot Does:
- Processes code locally first
- Sends context for suggestions
- No training on private repos (Business/Enterprise)
- Configurable data retention
What You Control:
- File/folder exclusions
- Organization policies
- Audit logging
- Data residency options
| Individual | Business | Enterprise |
|---|---|---|
| Personal use | Team features | Full platform |
| GitHub Copilot in IDE | Everything in Individual | Everything in Business |
| GitHub Copilot Chat | Private code security | Advanced security |
| CLI assistance | Knowledge bases | Custom models |
| $10/month | $21/user/month | $39/user/month |
What's New: Premium requests for advanced AI models
What are Premium Requests?
- Access to more advanced AI models (Claude, Gemini)
- Higher quality code generation
- Better understanding of complex requirements
- More accurate refactoring suggestions
Usage Limits:
- Business: 300 premium requests/user/month
- Enterprise: 1,000 premium requests/user/month
When to Use Premium:
- Complex architectural decisions
- Critical business logic
- Large-scale refactoring
- Security-sensitive code
✅ What Works:
- Clear, specific prompts
- Iterative refinement
- Context-rich development
- Test-driven approach
❌ What Doesn't:
- Vague requirements
- Blind acceptance
- Ignoring suggestions
- Fighting the tool
Myth: "It will replace developers"
Reality: Augments human creativity
Myth: "It writes perfect code"
Reality: Requires review and refinement
Myth: "It knows my entire codebase"
Reality: Limited context window
Myth: "It's just fancy autocomplete"
Reality: Understands intent and patterns
Your Daily Workflow Changes:
| Before Copilot | With Copilot |
|---|---|
| Google β Stack Overflow β Implement | Describe β Generate β Refine |
| Write boilerplate manually | Tab to accept suggestions |
| Context switch for docs | In-line documentation help |
| Debug alone | AI-assisted troubleshooting |
┌─────────────────────┐
│ Inline Completions  │ ─────┐
└─────────────────────┘      │
                             ▼
┌─────────────────────┐   ┌────────────┐
│ Chat/Ask Panel      │ ─▶│  Your Code │
└─────────────────────┘   └────────────┘
                             ▲
┌─────────────────────┐      │
│ Agent Mode          │ ─────┘
└─────────────────────┘
Today's Journey:
- Start with inline (quick wins)
- Master Chat/Ask (morning focus)
- Agent Mode (afternoon power)
Quick Wins:
- Function signatures β implementations
- Test case generation
- Repetitive patterns
- Boilerplate code
Common Pitfalls:
- Accepting without reading
- Breaking naming conventions
- Inconsistent style
- Missing edge cases
Key Features:
- 💬 Natural language queries
- 📋 Code block responses
- 🔧 Slash commands
- 📎 File attachments
- 🔄 Conversation history
Essential Commands:
- `/explain` - Understand code
- `/fix` - Debug errors
- `/tests` - Generate tests
- `/doc` - Add documentation
Select Code ────┐
                │
@workspace ─────┼──▶ Better Suggestions
                │
Paste Snippets ─┤
                │
Open Files ─────┘
Context Sources:
- Highlighted code blocks
- @workspace mentions
- Open editor tabs
- Pasted examples
- File references
Scenario: Transform problematic legacy module
Steps:
- Open messy legacy file
- Ask Copilot to explain current behavior
- Identify code smells together
- Generate refactoring plan
- Apply changes incrementally
- Generate tests for validation
// Legacy code example
public ArrayList ProcessData(ArrayList d) {
var result = new ArrayList();
for(int i=0; i<d.Count; i++) {
dynamic item = d[i];
if(item.active == true && item.value > 10) {
result.Add(new {
name = item.name.ToString().ToUpper(),
val = item.value * 1.1
});
}
}
return result;
}

Chat Prompt: "Explain this function and identify improvement opportunities"
Copilot's Analysis:
- Poor naming conventions
- No type safety
- Mixed concerns
- No error handling
- Outdated syntax
Suggested Refactoring:
- Modern ES6+ syntax
- Type definitions
- Separate filter/map
- Descriptive names
- Add validation
// After refactoring:
public List<ProcessedItem>
ProcessActiveItems(List<DataItem> items) {
const decimal MIN_VALUE = 10;
const decimal MULTIPLIER = 1.1m;
return items
.Where(item =>
item.Active &&
item.Value > MIN_VALUE)
.Select(item => new ProcessedItem {
Name = item.Name.ToUpper(),
Value = item.Value * MULTIPLIER
})
.ToList();
}

Key Improvements:
- ✅ Type safety with generics
- ✅ Functional LINQ approach
- ✅ Named constants
- ✅ Single responsibility
- ✅ Clear method intent
- ✅ No dynamic types
Benefits:
- Easier to test
- More maintainable
- Better performance
- Compile-time safety
Chat Prompt: "Generate comprehensive tests for both versions"
[TestClass]
public class ProcessActiveItemsTests {
[TestMethod]
public void ProcessActiveItems_FiltersInactiveItems() {
// Test inactive items are excluded
}
[TestMethod]
public void ProcessActiveItems_FiltersItemsBelowThreshold() {
// Test value threshold works
}
[TestMethod]
public void ProcessActiveItems_TransformsValidItemsCorrectly() {
// Test uppercase and multiplier
}
}

Copilot generates: Complete test suite with edge cases, fixtures, and assertions
❌ Common Pitfalls:
- Vague prompts
- Missing context
- Accepting blindly
- Over-engineering
- Wrong assumptions
✅ How to Fix:
- Be specific with requirements
- Select relevant code first
- Always review suggestions
- Ask for simple solutions
- Provide constraints upfront
Remember: Copilot is your pair programmer, not your replacement
Scenario: You've inherited a supermarket receipt system with special deals
Your Tasks:
- 🔍 Use Chat to understand the pricing logic
- 👃 Identify code smells (Long Method, Feature Envy)
- ✅ Generate tests for 90%+ coverage
- 🔧 Refactor while maintaining functionality
- ➕ Add new feature: 10% bundle discounts
Time: 25 minutes
Example Deals:
- 🪥 Buy 2 toothbrushes, get 1 free (€0.99 each)
- 🍎 20% discount on apples (€1.99/kg)
- 🍚 10% discount on rice (€2.49/bag)
- 🦷 5 tubes toothpaste for €7.49 (€1.79 each)
- 🍅 2 boxes cherry tomatoes for €0.99 (€0.69 each)
New Bundle Feature:
- Bundle: 1 toothbrush + 1 toothpaste = 10% off total
- Only complete bundles get discount
- Partial bundles = no discount
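The bundle rule above can be sketched in a few lines of C#. This is a hedged illustration, not the kata's actual design: the class and method names are invented here, and in the real exercise the rule would plug into the existing special-deals abstraction rather than live in a standalone helper.

```csharp
using System;

// Illustrative sketch of the new bundle rule: 1 toothbrush + 1 toothpaste
// forms a bundle, and only complete bundles get 10% off their combined price.
public static class BundleDiscount
{
    public const decimal ToothbrushPrice = 0.99m; // prices from the deals slide
    public const decimal ToothpastePrice = 1.79m;

    // Returns the discount amount (not the final total).
    public static decimal Calculate(int toothbrushes, int toothpastes)
    {
        // Partial bundles earn nothing: only matched pairs count.
        int completeBundles = Math.Min(toothbrushes, toothpastes);
        decimal bundlePrice = ToothbrushPrice + ToothpastePrice;
        return completeBundles * bundlePrice * 0.10m;
    }
}
```

For example, 2 toothbrushes and 1 toothpaste form exactly one complete bundle, so only that pair's combined price is discounted; 3 toothpastes alone earn no discount at all.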
Repository: Supermarket Receipt Refactoring Kata
- Start with the `main` branch (no tests)
Requirements:
- Achieve 90%+ test coverage
- Maintain all existing functionality
- Apply relevant refactorings
Deliverables:
- Comprehensive test suite
- Refactored receipt system
- New bundle discount feature (10% off complete bundles)
DO:
- ✅ Select code before asking
- ✅ Be specific about requirements
- ✅ Ask for explanations first
- ✅ Request tests with changes
- ✅ Iterate on suggestions
DON'T:
- ❌ Use Agent Mode (Chat only!)
- ❌ Accept without review
- ❌ Rush through changes
- ❌ Ignore test failures
Repository: https://github.com/OnCore-NV/Refactoring-Kata
Exercise Goals:
- 🔍 Understand pricing logic with Chat
- 👃 Identify code smells
- ✅ Generate tests (90%+ coverage)
- 🔧 Refactor the code
- ➕ Add bundle discount feature
Remember: Chat only, no Agent Mode!
Timeline (25 min):
Start (0:00)
├── Understand code (0:00-0:05)
├── Identify issues (0:05-0:10)
├── Plan refactoring (0:10-0:15)
├── Implement changes (0:15-0:20)
└── Write tests (0:20-0:25)
Deliverables:
- Test suite with 90%+ coverage
- Refactored receipt system
- Bundle discount (10% off complete bundles)
Share Your Experience:
- What surprised you?
- Most useful prompt?
- Biggest challenge?
- Key learning?
Common Discoveries:
- Context matters immensely
- Iterative prompting works best
- Tests reveal hidden issues
- AI explanations aid understanding
Why It Matters:
- Better prompts = Better code
- Reduces iterations needed
- Saves debugging time
- Improves code quality
- Increases productivity
Key Principle: AI responds to what you ask, not what you meant
The Prompt Spectrum:
Vague ────────────────▶ Specific
"Make a function"
    ↓
"Create a validation function"
    ↓
"Create an email validation function"
    ↓
"Create a C# method that validates
email format using regex, returns
bool, handles null input"
1. Context - Set the scene
2. Task - What needs to be done
3. Constraints - Rules and requirements
4. Format - Expected output structure
5. Examples - When helpful
// Poor prompt:
// "fix this"
// Good prompt:
// "Refactor this C# method to use async/await pattern,
// maintain all existing functionality, add proper error
// handling with try-catch, and include XML documentation"

Code Generation:
"Create a [language] [type] that:
- [requirement 1]
- [requirement 2]
- Uses [specific pattern]
- Returns [type]"
Debugging:
"This [language] code throws [error].
Expected: [behavior]
Actual: [what happens]
Please identify and fix the issue."
Refactoring:
"Refactor this code to:
- Follow [principle/pattern]
- Improve [metric]
- Maintain [requirement]
- Use [technology]"
Learning:
"Explain this [concept/code]:
- Like I'm a [level] developer
- Include [specific aspects]
- Provide [examples]"
Code Context:
- Related classes/interfaces
- Method signatures
- Data structures
- Business rules
Technical Context:
- Framework version
- Target environment
- Performance requirements
- Security constraints
Example with Context:
// Context: This runs in a high-traffic web API with
// strict 100ms response time requirement
// Task: Optimize this product search to use caching

Initial: "Create a user service"
    ↓
Better: "Create a C# service class for user management"
    ↓
Refined: "Create a C# service class with CRUD operations
for users, using repository pattern"
    ↓
Optimal: "Create a C# UserService class implementing
IUserService with async CRUD methods, using
IUserRepository, include logging and validation"
Pro Tip: Each iteration builds on the previous - don't start over!
❌ Too Vague:
- "Make it better"
- "Fix the bug"
- "Add some tests"
- "Improve performance"
❌ Too Broad:
- "Create an entire application"
- "Refactor everything"
- "Add all features"
❌ Missing Context:
- No language specified
- No requirements stated
- No constraints mentioned
- No expected behavior
❌ Conflicting Instructions:
- "Make it simple but handle all edge cases"
- "Quick solution with perfect error handling"
CRISP Framework:
- Context - Set the scene
- Role - Define AI's role
- Instructions - Clear steps
- Specifics - Details matter
- Product - Expected output
Example: "As a senior developer (Role), refactor this legacy method (Context) to use modern C# patterns and LINQ (Instructions), maintaining all unit tests (Specifics), and return the updated code with async support (Product)"
STAR Method:
- Situation - Current state
- Task - What needs doing
- Action - Steps to take
- Result - Expected outcome
Example:
"Situation: Legacy code with nested loops
Task: Optimize for performance
Action: Use LINQ and parallel processing
Result: 50% faster execution time"
1. Chain of Thought:
"First, analyze this code for issues.
Then, suggest improvements.
Finally, implement the top 3 improvements."
2. Few-Shot Examples:
"Convert to this pattern:
Input: oldMethod() { sync code }
Output: async newMethod() { await code }
Now convert: myFunction() { ... }"
3. Role-Based:
"As a security expert, review this code for vulnerabilities"
"As a performance engineer, optimize this query"
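The few-shot pattern above works because the model imitates the input/output shape you show it. Here is a concrete, hedged instance of that sync-to-async conversion in C# (the method names and the `Task.Run` wrapper are illustrative choices, not the only correct conversion -- truly I/O-bound code would await the underlying async API instead):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// The "input" (sync) and "output" (async) shapes from the few-shot example,
// applied to a small concrete operation.
public static class AsyncConversion
{
    // Input pattern: synchronous, blocking method.
    public static int SumOfSquares(int[] values) =>
        values.Sum(v => v * v);

    // Output pattern: async/await wrapper that moves the work
    // off the caller's thread (appropriate for CPU-bound work).
    public static async Task<int> SumOfSquaresAsync(int[] values) =>
        await Task.Run(() => SumOfSquares(values));
}
```

Showing one such before/after pair in the prompt gives the model a template; "Now convert: myFunction() { ... }" then asks it to apply the same transformation.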
I want you to become my prompt engineer. Your goal is to help me craft the best possible prompt for my needs. The prompt will be used by you <OpenAI, copilot, etc>.
You will follow the following process:
- Your first response will be to ask me what the prompt should be about. I will provide my answer, but we will need to improve it through continual iterations by going through the next steps.
- Based on my input, you will generate 2 sections:
- Revised prompt (provide your rewritten prompt. It should be clear, concise, and easily understood by you)
- Questions (ask any relevant questions pertaining to what additional information is needed from me to improve the prompt)
- We will continue this iterative process with me providing additional information to you and you updating the prompt in the Revised prompt section until I say we are done.
Building on Exercise 1: Now that you have a clean receipt system, let's practice prompt refinement
New Requirement: "Add a loyalty program to the receipt system"
Your Task:
- Start with this vague prompt in Copilot Chat
- Iteratively refine it using CRISP framework
- Document improvements at each step
- Implement the best solution
Time: 15 minutes
Vague Prompt:
Add loyalty points to receipts
CRISP-Enhanced Prompt:
// CONTEXT: Extending our refactored SupermarketReceipt system
// ROLE: Act as a senior developer following SOLID principles
// INTENT: Add loyalty program that calculates points by tier
// SPECIFIC: 1pt/€1, Tiers: Bronze/Silver/Gold (0/500/1000)
// Gold gets 2x points on produce, Silver gets 1.5x
// PATTERN: Use existing ISpecialDeal interface pattern
// Example: €50 receipt (€20 produce) = 70pts for Gold tier

Result: A first try that is much closer to what we actually want.
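The loyalty rules spelled out in the CRISP prompt are concrete enough to sketch directly. This is an illustrative implementation, not the exercise's answer (the `Tier` enum and `LoyaltyPoints` class are names invented for this sketch): 1 point per euro, with the tier multiplier applied only to the produce portion.

```csharp
using System;

// Sketch of the loyalty rules from the CRISP prompt:
// 1 point per euro; Gold earns 2x and Silver 1.5x on produce.
public enum Tier { Bronze, Silver, Gold }

public static class LoyaltyPoints
{
    public static decimal ProduceMultiplier(Tier tier) => tier switch
    {
        Tier.Gold => 2.0m,
        Tier.Silver => 1.5m,
        _ => 1.0m,
    };

    public static int Calculate(decimal receiptTotal, decimal produceTotal, Tier tier)
    {
        decimal nonProduce = receiptTotal - produceTotal;          // 1 pt per euro
        decimal produce = produceTotal * ProduceMultiplier(tier);  // boosted on produce
        return (int)Math.Floor(nonProduce + produce);
    }
}
```

This reproduces the prompt's worked example: a €50 receipt with €20 of produce at Gold tier earns 30 + (20 × 2) = 70 points.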
Vague:
- Generic point system
- No integration with receipts
- Missing business rules
- Hardcoded values
CRISP:
- Tier-based rewards
- Receipt integration
- Configurable rules
- Test coverage included
Key Insight: Context + Specificity = Better Code
Discussion Points:
- How did output quality change?
- Which additions made the biggest difference?
- Time saved with better prompts?
Key Insights:
- Specificity reduces rework
- Context prevents assumptions
- Examples guide style
- Constraints ensure compliance
Create Templates For:
- Common refactoring tasks
- Standard components
- Test generation
- Documentation
- Bug fixes
- Code reviews
Example Team Template:
"Generate C# unit tests for [method]:
- Use xUnit framework
- Follow AAA pattern
- Include edge cases
- Mock dependencies with Moq
- Aim for 100% coverage"
API Endpoint:
"Create a REST endpoint:
- POST /api/orders
- Accept OrderDto
- Validate required fields
- Return 201 with location
- Handle conflicts with 409
- Use MediatR pattern"
- Write integration test using TestContainers.
Data Access:
"Implement repository method:
- Use EF Core with includes
- Sort by LastLogin desc
- Support pagination
- Return IQueryable
- Add index hints"
📝 General Coding:
- GPT-4.1 - Fast, accurate default
- GPT-4o - Low latency + visuals
- Claude Sonnet 3.7 - Structured output
🧠 Deep Reasoning:
- o3 - Multi-step problems
- Claude Opus 4 - Complex analysis
- Claude Sonnet 4 - Balanced power
- Gemini 2.5 Pro - Large codebases
⚡ Quick Tasks:
- o4-mini - Fastest responses
- Claude Sonnet 3.5 - Quick syntax help
- Gemini 2.0 Flash - Real-time + visuals
🛠️ Agent Mode Ready: GPT-4.1, GPT-4o, all Claude Sonnets
👁️ Visual Understanding: GPT models, Gemini Flash, Claude 4
Pro Tip: GPT-4.1 is the reliable default, use specialized models for specific needs
Create Your Library:
## Refactoring Template
"Refactor this C# code to:
- Follow SOLID principles
- Use async/await patterns
- Implement dependency injection
- Add proper exception handling"
## Debugging Template
"This C# code throws [exception type].
Expected: [behavior]
Actual: [error message]
Stack trace: [trace]
Help me identify and fix the issue."

Pro Tip: Save templates in team knowledge base
Copilot in Terminal:
- Complex command construction
- Script generation
- Error interpretation
- Pipeline creation
Example Prompts:
# "Find all C# files modified in last week"
find . -name "*.cs" -mtime -7
# "Build and run tests for solution"
dotnet build && dotnet test
# "Create NuGet package with version"
dotnet pack -c Release -p:PackageVersion=$(date +%Y.%m.%d)

What Goes in Knowledge Base:
- Coding standards
- Architecture decisions
- Common patterns
- API documentation
- Troubleshooting guides
How Copilot Uses It:
- Contextual suggestions
- Consistent patterns
- Team-specific solutions
- Reduced hallucinations
Exclusion Options:
# .copilotignore
secrets/
*.env
**/credentials/**
private-docs/

Organization Policies:
- Data retention settings
- Telemetry controls
- Audit requirements
- Compliance rules
You ──Task──▶ Agent Mode ──▶ Plan ──▶ Approve
                                        │
                                        ▼
Complete ◀── Review ◀── Execute
When to Use:
- Multi-file changes
- Complex refactoring
- Feature implementation
- Systematic updates
When NOT to Use:
- Learning/exploration
- Critical business logic
- Without review time
Built-in Protections:
- Plan Phase: Review before execution
- Approval Required: Explicit consent
- Incremental Application: Step-by-step
- Rollback Capability: Undo changes
Your Responsibilities:
- Read the plan carefully
- Test after execution
- Maintain git history
- Document decisions
Live Demo: Converting our refactored receipt library into a web API
CRISP Prompt:
// CONTEXT: Refactored receipt library
// ROLE: API architect
// INTENT: Transform to Web API
// SPECIFIC: REST endpoints:
// - POST /api/receipts
// - GET /api/receipts/{id}
// - POST /api/deals
// PATTERN: RESTful, DTOs, 201s

Watch For:
- Agent's planning phase
- File structure choices
- Controller generation
- DTO mappings
Agent's Plan:
- Create ASP.NET project
- Add receipt controller
- Create API DTOs
- Map domain → DTOs
- Configure DI
- Add Swagger docs
Your Review:
- Check architecture
- Verify endpoints
- Approve/modify plan
Results:
- Clean separation
- Proper HTTP semantics
- Testable design
Workflow:
- Copy error to Chat
- Add code context
- Include stack trace
- Get explanation
- Apply fix
- Verify solution
Effective Prompts:
- Full error message
- Relevant code snippet
- Expected behavior
- What you've tried
- Environment details
Test-Driven Development:
1. Write test description
2. Let Copilot generate test
3. Implement to pass test
4. Refactor with confidence
Documentation-First:
1. Write function signature
2. Add detailed JSDoc
3. Let Copilot implement
4. Review and refine
Common Topics:
- Context window limits
- Multi-language projects
- Integration with CI/CD
- Performance impact
- Learning resources
Remember:
- No question too basic
- Share your use cases
- Learn from each other
Morning Focus: Individual productivity in IDE
Afternoon Shift: Team collaboration on platform
Local Development ──▶ Push to GitHub ──▶ PR Creation
                                            │
                                            ▼
Team Collaboration ◀── AI Review
You'll Learn:
- PR automation features
- AI-powered code review
- Issue management with AI
- Security scanning integration
- Team knowledge sharing
You'll Do:
- Context-driven exercise
- Measure AI impact
- Build action plan
Quick Tasks:
- Note one morning insight
- Think about team challenges
- Prepare afternoon questions
Back at 13:00!
Share Your Morning Insights:
- What was your key insight?
- Which feature surprised you?
- Team challenges identified?
Automatic Features:
- PR summary generation
- Change categorization
- Impact analysis
- Test plan suggestions
How It Works:
Create PR
   ↓
Copilot Analyzes
   ↓
Generate Summary
   ↓
Suggest Tests
   ↓
Human Review
Setup:
- Add Copilot as reviewer
- Receives code analysis
- Posts review comments
- Suggests improvements
Benefits:
- Consistent review standards
- Catches common issues
- Never tired or rushed
- Frees human reviewers
Remember: Supplement, don't replace human review
Workflow Example:
Discuss in Chat ──▶ Identify Tasks ──▶ Generate Issues ──▶ Assign & Track
Issue Templates with AI:
- Bug report generation
- Feature request drafting
- Technical debt documentation
- Enhancement proposals
What to Include:
- Architecture diagrams
- Coding standards
- Onboarding guides
- Common solutions
- Decision records
Query Examples:
- "How do we handle authentication?"
- "What's our testing strategy?"
- "Database schema conventions?"
Popular Extensions:
- Jira integration
- Slack notifications
- Documentation search
- API references
- Security tools
Custom Extensions:
- Team-specific tools
- Internal APIs
- Workflow automation
Live Demo Flow:
- Push feature branch
- Create PR
- Watch summary generation
- Review test suggestions
- Edit/enhance as needed
What to Notice:
- Accuracy of summary
- Suggested reviewers
- Test plan quality
- Label recommendations
Review Focus Areas:
- Security vulnerabilities
- Performance issues
- Code style violations
- Best practice suggestions
- Bug risk identification
Interaction Example:
- Copilot comments
- Developer responses
- Suggestion application
- Re-review cycle
DO:
- ✅ Review all AI suggestions
- ✅ Add human context
- ✅ Verify security implications
- ✅ Test thoroughly
- ✅ Document decisions
DON'T:
- ❌ Auto-merge AI approvals
- ❌ Skip human review
- ❌ Ignore domain knowledge
- ❌ Trust blindly
- ❌ Blame AI for bugs
NEVER Share:
- 🔑 API keys or tokens
- 🗝️ Passwords or credentials
- 💳 Customer PII data
- 🏢 Proprietary algorithms
- 📊 Sensitive business data
ALWAYS:
- ✅ Use enterprise AI tools
- ✅ Review generated code
- ✅ Check data retention policies
- ✅ Follow company guidelines
- ✅ Report security concerns
❌ Free/Consumer AI:
- Data used for training
- No privacy guarantees
- Logs conversations
- Public model updates
- No audit trail
Risk: Code becomes public!
✅ GitHub Copilot Business / Enterprise:
- Zero data retention
- No training on your code
- SOC 2 compliant
- Audit logs available
- Enterprise controls
Safe: Your code stays yours!
Real Incidents:
- API Key Exposure: Developer asked ChatGPT to debug code with live AWS keys
- Customer Data Leak: Pasted real customer database queries into free AI
- Algorithm Theft: Proprietary trading logic ended up in public training data
- Compliance Violation: GDPR data processed through non-compliant AI
Prevention: Think before you paste!
Before Using Any AI Tool:
☐ Is this tool approved by IT?
☐ Have I removed all secrets?
☐ Is the data anonymized?
☐ Do I understand retention?
☐ Am I following policy?
Pro Tip: Create test data sets for AI interactions
Best Practices:
- Use placeholders: Replace real values with `<API_KEY_HERE>`
- Environment variables: Reference, don't embed
- Mock data: Create realistic but fake examples
- Sanitize first: Remove before sharing with AI
Example:
// DON'T: client.ApiKey = "sk-1234abcd...";
// DO: client.ApiKey = Environment.GetEnvironmentVariable("API_KEY");

Automatic Filtering:
- Blocks common secret patterns
- Prevents generating real API keys
- Filters personally identifiable information
- Excludes files in .gitignore
Your Controls:
- Disable for specific files
- Exclude repositories
- Review telemetry settings
- Configure organization policies
GitHub Copilot Settings:
- Exclude specific file patterns
- Disable for sensitive repositories
- Require code review for AI suggestions
- Monitor usage through audit logs
Policy Example:
# .github/copilot-config.yml
disabled_for:
- "**/*secret*"
- "**/credentials/*"

Which is safer to share with AI?
A) password = "SuperSecret123!"
B) password = Environment.GetEnvironmentVariable("DB_PASS")
C) // TODO: Add password from vault
Answer: B and C are safe, A exposes credentials
Remember: When in doubt, leave it out!
Your Task: "Build a Team Lunch Voting App with Agent Mode"
Basic Features:
- Suggest restaurants for lunch
- Vote on today's options
- See the winning restaurant
Time: 30 minutes
Mode: Agent Mode only
Typical Issues:
- Wrong tech stack (jQuery? Angular?)
- No clear voting rules
- Missing winner calculation
- Poor database design
- No time constraints
Let's use proper context! Clone the following repository: https://github.com/OnCore-NV/GitHub-Copilot-Track
Context Files Provided:
/context-pack
├── README.md           # Overview & instructions
├── requirements.md     # Voting rules, deadlines
├── tech-stack.md       # React, Java Spring Boot
├── api-spec.yaml       # OpenAPI specification
├── database-schema.sql # PostgreSQL schema
├── ui-mockup.md        # Design & components
└── code-patterns.md    # Examples & standards
Same Task: Build Team Lunch Voting App
Methods:
- Attach Files: Drag docs into chat
- @workspace: Reference project patterns
- Paste Examples: Show desired patterns
- Clear Constraints: Specify requirements
"Build the Team Lunch Voting App following the attached requirements.md, matching our code-patterns.md, and using ui-mockup.md as a style guide."
Build Again with Context
Fresh Start: Clear your project folder, except for the context-pack
Time: 25 minutes (5 min less!)
Focus: Quality over speed
- Did the added context have a noticeable impact?
- How was the quality of the generated codebase?
- What surprised you?
- What more context would you add?
Rank by Impact:
- Architecture patterns
- Code examples
- Testing requirements
- Style guide
- Integration docs
- UI specifications
Essential Context Elements:
- Clear requirements/constraints
- Existing patterns/examples
- Integration points
- Testing expectations
- Performance requirements
Team Action: Document your patterns!
Daily Goals:
- Use inline completions for all coding
- Ask Copilot Chat 5 questions per day
- Generate one test suite
- Save 30 minutes on boilerplate
Skill Building:
- Master CRISP prompt framework
- Create 3 personal prompt templates
- Refactor legacy code with Chat
- Debug complex issue with AI help
Advanced Techniques:
- Use Agent Mode for multi-file feature
- Integrate AI into code reviews
- Build with context documentation
- Optimize model selection per task
Technical Mastery:
- Architect with AI assistance
- Build a company prompt repository
- Custom tooling & workflows
- Continuous learning habit
Leadership & Impact:
- Team AI champion
- Best practices documentation
- Mentor others regularly
- Measure & share metrics
Individual Anti-Patterns:
- Accepting without review
- Fighting the suggestions
- Ignoring security warnings
- Copy-paste programming
Team Anti-Patterns:
- No shared standards
- Forcing adoption
- No knowledge sharing
Core Concepts:
- LLMs predict, not memorize
- Context is everything
- CRISP framework for prompts
- Choose the right model
Power Features:
- Inline completions save time
- Chat understands your code
- Agent Mode handles complexity
- Real-time collaboration
Security First:
- Never share secrets with AI
- Use enterprise tools only
- Review all suggestions
- Enable protective policies
Remember:
- Start small, build habits
- Document what works
- Share with your team
- AI amplifies YOUR skills
Official Resources:
Your Input Matters!
[QR CODE PLACEHOLDER]
Your Instructors:
- Name: [Instructor Name]
- Email: [instructor@email.com]
Remember: if you have any questions, feel free to reach out!
Going Around the Room:
- How has your experience with GitHub Copilot been these last few weeks?
- What has worked well for you?
- What could have gone better?
- Are there things that are still unclear?
What is Copilot CLI?
- AI assistant directly in your terminal
- Two modes: Interactive & Programmatic
- Direct integration with GitHub.com
- Autonomous tool execution
Key Benefits:
- Stay in terminal workflow
- Iterative task building
- Complete complex operations
- Access GitHub data seamlessly
Installation:

```shell
# Install GitHub CLI extension
gh extension install github/gh-copilot

# Start interactive session
copilot

# Or use programmatic mode
copilot -p "List my open PRs"
```

Supported Platforms:
- Linux, macOS, Windows (WSL)
- Available with Pro, Business & Enterprise
```
# Make targeted changes
"Change H1 color to dark blue"

# Review changes
"Show last 5 CHANGELOG.md changes"

# Improve code quality
"Suggest improvements to content.js"
```

Issue & PR Management:
- List your open pull requests
- Check assigned issues
- Create new issues automatically
- Manage PR lifecycle
Example Commands:

```
"List my open PRs"
"Start working on issue #1234 in a new branch"
"Merge all open PRs I created in owner/repo"
```

Advanced Workflows:
- Create pull requests with file changes
- Review PR code changes
- Set up GitHub Actions workflows
- Find specific workflow patterns
Security Features:
- Trusted directory prompts
- Tool approval system
- Scoped permissions
- Risk mitigation options
Security Considerations:
- Trusted Directories - Only launch from safe locations
- Tool Approval - Review commands before execution
- Automatic Approval Options - Use carefully
- Risk Mitigation - Consider restricted environments

Approval Options:

```shell
--allow-all-tools     # All tools
--allow-tool 'shell'  # Specific tools
--deny-tool 'rm'      # Block dangerous commands
```

Best Practices:
- Never launch from home directory
- Review suggestions carefully
- Use in sandboxed environments
- Set up appropriate tool restrictions
- Understand security implications
Models:
- Default: Claude Sonnet 4
- Premium request counts apply
- Use `/model` to change models
- Use `/feedback` for improvements
What is SpecKit?
- Open-source toolkit for structured development
- Focuses on what before how
- Specifications become executable
- Intent-driven development approach
Core Philosophy:
- Specifications define outcomes
- Multi-step refinement process
- Rich specification creation
- Predictable results over "vibe coding"
Installation:

```shell
# Persistent installation
uv tool install specify-cli \
  --from git+https://github.com/github/spec-kit.git

# One-time usage
uvx --from git+https://github.com/github/spec-kit.git \
  specify init <PROJECT_NAME>
```

Supported AI Agents: GitHub Copilot, Claude Code, Cursor, Windsurf, Gemini CLI, and more
1. Project Initialization: `specify init my-project --ai copilot`
2. Establish Principles: `/speckit.constitution` - Create governing principles for code quality, testing standards, UX consistency
3. Create Specifications: `/speckit.specify` - Describe what you want to build, focus on outcomes
4. Technical Planning: `/speckit.plan` - Define tech stack and architecture choices
5. Task Breakdown: `/speckit.tasks` - Generate actionable task lists
6. Implementation: `/speckit.implement` - Execute all tasks according to the plan

Optional Quality Commands:
- `/speckit.clarify` - Address underspecified areas
- `/speckit.analyze` - Cross-artifact consistency checking
- `/speckit.checklist` - Generate custom quality checklists
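Taken together, a typical SpecKit session might look like this sketch (the project name is a placeholder, and the slash commands run inside your AI agent's chat, not the shell):

```shell
# Scaffold a new project wired up for GitHub Copilot
specify init my-project --ai copilot
cd my-project

# Then, inside the agent chat, run the workflow commands in order:
#   /speckit.constitution   # governing principles
#   /speckit.specify        # what to build (outcomes, not tech)
#   /speckit.plan           # tech stack & architecture
#   /speckit.tasks          # actionable task list
#   /speckit.implement      # execute the plan
```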
Development Phases:
- 0-to-1 Development - Generate from scratch
- Creative Exploration - Parallel implementations
- Iterative Enhancement - Brownfield modernization
Key Benefits:
- Technology independence
- Enterprise constraint support
- User-centric development
- Creative & iterative processes
Prerequisites:
- Linux/macOS/Windows
- Supported AI coding agent
- uv package manager
- Python 3.11+, Git
What is MCP?
- Protocol for connecting AI to external data sources
- Extends GitHub Copilot coding agent capabilities
- JSON-based configuration
- Autonomous tool execution
Key Concepts:
- MCP Servers provide tools & data
- Repository-level configuration
- Secure environment variables
- Tool allowlisting for safety
Integration Benefits:
- Connect to external APIs
- Access databases and services
- Custom tool integration
- Seamless workflow enhancement
Supported Server Types:
- Local - Run commands locally
- HTTP - REST API endpoints
- SSE - Server-sent events
```json
{
  "mcpServers": {
    "SERVER_NAME": {
      "type": "local|http|sse",
      "command": "...",
      "args": [...],
      "tools": ["tool1", "*"],
      "env": {...}
    }
  }
}
```

Sentry Integration:
```json
{
  "mcpServers": {
    "sentry": {
      "type": "local",
      "command": "npx",
      "args": ["@sentry/mcp-server@latest"],
      "tools": ["get_issue_details"],
      "env": {
        "SENTRY_ACCESS_TOKEN": "COPILOT_MCP_SENTRY_TOKEN"
      }
    }
  }
}
```

Azure Services:
- Azure Cosmos DB access
- Azure Storage integration
- Seamless Azure DevOps connection
Notion Integration:
```json
{
  "mcpServers": {
    "notionApi": {
      "type": "local",
      "command": "docker",
      "args": ["run", "--rm", "-i", "mcp/notion"],
      "env": {
        "NOTION_API_KEY": "COPILOT_MCP_NOTION_KEY"
      },
      "tools": ["*"]
    }
  }
}
```

Other Popular Servers:
- Cloudflare services
- Custom database connections
- API integrations
Security Considerations:
- Environment Secrets - Use the `COPILOT_MCP_*` prefix
- Tool Allowlisting - Specify exact tools, avoid `"*"`
- Autonomous Execution - No approval prompts
- Scope Limitation - Minimal necessary permissions
Secret Management:

```
# Repository Settings → Environments
# Create 'copilot' environment
# Add secrets with COPILOT_MCP_ prefix
```

Best Practices:
- Start with read-only tools
- Use specific tool names
- Test thoroughly in safe environments
- Monitor usage and access patterns
- Regular security reviews
Validation Process:
- Create test issue
- Assign to Copilot
- Check MCP server startup logs
- Verify tool availability
- Test integration functionality
Transform Daily Work:
- Save time on boilerplate
- Generate automated tests
- Instant code explanations
- Confident tech exploration
Level Up Skills:
- New languages in weeks
- Master frameworks faster
- AI-powered debugging
- Confident refactoring
Amplify Impact:
- AI-collaborative architecture
- Build prompt libraries
- Enhanced team mentoring
- Faster quality delivery
Start Your Journey:
- Build daily habits
- Document successes
- Share learnings
- AI amplifies YOUR skills