From 862335801075f44d3225d757fe3da133f5bc9b11 Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Tue, 3 Feb 2026 10:28:41 +0200
Subject: [PATCH 01/22] First paragraph and author details

---
 _config.yml                        |  4 ++++
 _posts/2026-02-03-my-post-title.md | 28 ++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+)
 create mode 100644 _posts/2026-02-03-my-post-title.md

diff --git a/_config.yml b/_config.yml
index 07d6fcd7..31293c5a 100644
--- a/_config.yml
+++ b/_config.yml
@@ -474,3 +474,7 @@ authors:
     name: Kimmo Tomminen
     email: af60ec62d2bf274c0f5ab042b8eab3a8
     url: https://www.linkedin.com/in/kimmo-tomminen-68327a223/
+  solita-markump:
+    name: Markus Kumpulainen
+    email: 6675d0bcd8f5056dfc7b45d602a8ad60
+    url: https://www.linkedin.com/in/markump/
diff --git a/_posts/2026-02-03-my-post-title.md b/_posts/2026-02-03-my-post-title.md
new file mode 100644
index 00000000..ca6763e7
--- /dev/null
+++ b/_posts/2026-02-03-my-post-title.md
@@ -0,0 +1,28 @@
+---
+layout: post
+title: Prompt Engineering 101
+author: solita-markump
+excerpt: Starter Guide to Prompt Engineering - How to Get Good Results
+tags:
+  - AI
+  - Generative AI
+  - GitHub Copilot
+  - Software Development
+  - Best Practices
+  - Productivity
+  - AI in Development
+---
+
+Let's be honest. Those who have embraced AI as part of their daily development work have noticed significant improvements in both speed and quality. So the question is no longer "Is AI useful for coding?" but rather "How do I get the most out of it?"
+
+Why do some developers see tremendous benefits while others end up with spaghetti code? I took on a challenge at the end of last year to only work by prompting, in order to learn the ins and outs of AI-assisted development. At my current level, I haven't written a single line of code by hand in a couple of months, yet my speed and quality have only improved.
+
+In this blog post, I'll share my personal learnings from this journey and try to extract the most important takeaways in this Prompt Engineering 101 starter guide.
+
+## Get The Tools (GitHub Copilot)
+
+
+
+### Subsection
+
+More content...

From fd49c9b227fc688c2d306db483cd686400a84b5d Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Tue, 3 Feb 2026 14:14:17 +0200
Subject: [PATCH 02/22] Second paragraph shaping up

---
 _posts/2026-02-03-my-post-title.md | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/_posts/2026-02-03-my-post-title.md b/_posts/2026-02-03-my-post-title.md
index ca6763e7..c2b64bc4 100644
--- a/_posts/2026-02-03-my-post-title.md
+++ b/_posts/2026-02-03-my-post-title.md
@@ -19,7 +19,31 @@ Why do some developers see tremendous benefits while others end up with spaghett
 In this blog post, I'll share my personal learnings from this journey and try to extract the most important takeaways in this Prompt Engineering 101 starter guide.
 
-## Get The Tools (GitHub Copilot)
+## So What's the Problem?
+
+Most of us (myself included) first encountered AI capabilities through ChatGPT. For a long time, it served as a "Google replacement" and felt like a knowledgeable colleague you could ask for help and ideas. However, I quickly noticed AI's limitations, as I often received hallucinations and outdated information in return. Then came agents and code generation, where the impact of hallucinations and poor code quality actually became a hindrance rather than a benefit.
+
+The workflow typically went something like this:
+1. There's a bug to fix
+2. You debug manually and try to find the root cause
+3. 
You exhaust all your own resources trying to solve it
+4. You finally ask the AI
+
+The result is predictably poor and useless.
+
+When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity." Instead, you need to guide the AI like you would instruct a junior developer, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simpler tasks, I found it performed remarkably well!
+
+This is the most important lesson in prompt engineering: If you don't know how something should be done, the AI doesn't know either. AI is an eager guessing machine, and without clear guidance, it will simply guess, eagerly.
+
+The main thing in all of this is understanding the AI's context.
+
+
+## Getting started
+
+What you need:
+- GitHub Copilot license
+- GitHub Copilot extension for the IDE of your choosing
+- Put it on Agent

From 63bfd9dcc4521911b9baecaea9cc48f6d1bba215 Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Wed, 4 Feb 2026 10:01:00 +0200
Subject: [PATCH 03/22] Context paragraph done

---
 _posts/2026-02-03-my-post-title.md | 32 +++++++++++++++++++++++++++---
 1 file changed, 29 insertions(+), 3 deletions(-)

diff --git a/_posts/2026-02-03-my-post-title.md b/_posts/2026-02-03-my-post-title.md
index c2b64bc4..5ec05fdc 100644
--- a/_posts/2026-02-03-my-post-title.md
+++ b/_posts/2026-02-03-my-post-title.md
@@ -33,10 +33,33 @@ The result is predictably poor and useless.
 
 When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity." Instead, you need to guide the AI like you would instruct a junior developer, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simpler tasks, I found it performed remarkably well!
 
-This is the most important lesson in prompt engineering: If you don't know how something should be done, the AI doesn't know either. AI is an eager guessing machine, and without clear guidance, it will simply guess, eagerly.
+This is the most important lesson in prompting: If you don't know how something should be done, the AI doesn't know either. AI is an eager guessing machine, and without clear guidance, it will simply guess, eagerly.
 
-The main thing in all of this is understanding the AI's context.
+## Context Is Everything
 
+The key to effective prompting is understanding how AI context works. While the model is trained on vast amounts of data from across the internet, the context provided in your current chat session carries significantly more weight. I initially assumed that since JavaScript dominates AI training data, the model would perform poorly with other languages. This assumption was incorrect. Once you grasp how context influences output, you can achieve excellent results regardless of programming language or tech stack.
+
+Let me illustrate this with a practical example.
+
+**Example: Bad prompt**
+
+```
+Create a new API endpoint for user profiles.
+```
+
+**Example: Better prompt**
+
+```
+Create a new API endpoint for fetching user profiles.
+Look at src/controllers/ProductController.ts and src/routes/products.ts
+for reference on how we structure our endpoints in this project.
+Look at src/views/UserProfile.tsx and see what placeholders we are using to deduce what fields should be returned.
+Follow the same patterns for error handling and response formatting. 
+```
+
+The second prompt is more effective because it provides the AI with concrete reference points from your actual codebase. Instead of leaving the AI to guess your project's conventions, you're explicitly pointing it to existing implementations. The AI can now read those files, understand your patterns for routing, controller structure, and error handling, and produce code that fits seamlessly into your project. The first prompt would likely result in generic code that follows common conventions from its training data, requiring significant refactoring to match your style.
+
+Later in this article, I'll show how even the bad example can yield good results by leveraging Planning mode.
 
 ## Getting started
 
@@ -47,6 +70,9 @@ What you need:
 - Put it on Agent
 
-### Subsection
+## Concrete takeaways
 
 More content...
+
+
+- Multiagent setups, skills, frameworks etc etc etc -> just focus on understanding context
\ No newline at end of file

From 5227a26e162c94a8444530752d3631d6ad010966 Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Wed, 4 Feb 2026 11:31:15 +0200
Subject: [PATCH 04/22] Planning mode

---
 _posts/2026-02-03-my-post-title.md | 71 ++++++++++++++++++++----------
 1 file changed, 47 insertions(+), 24 deletions(-)

diff --git a/_posts/2026-02-03-my-post-title.md b/_posts/2026-02-03-my-post-title.md
index 5ec05fdc..293ab623 100644
--- a/_posts/2026-02-03-my-post-title.md
+++ b/_posts/2026-02-03-my-post-title.md
@@ -23,8 +23,8 @@ In this blog post, I'll share my personal learnings from this journey and try to
 
 Most of us (myself included) first encountered AI capabilities through ChatGPT. For a long time, it served as a "Google replacement" and felt like a knowledgeable colleague you could ask for help and ideas. However, I quickly noticed AI's limitations, as I often received hallucinations and outdated information in return. Then came agents and code generation, where the impact of hallucinations and poor code quality actually became a hindrance rather than a benefit.
 
-The workflow typically went something like this:
-1. There's a bug to fix
+A common mistake when starting out is asking the AI to solve problems that are too complex. The workflow typically goes something like this:
+1. There's an issue to fix
 2. You debug manually and try to find the root cause
 3. You exhaust all your own resources trying to solve it
 4. You finally ask the AI
@@ -41,38 +41,61 @@
 Let me illustrate this with a practical example.
 
-**Example: Bad prompt**
+**Bad prompt:**
+> Create a new API endpoint for user profiles.
 
-```
-Create a new API endpoint for user profiles.
-```
+**Better prompt:**
+> Create a new API endpoint for fetching user profiles.
+> Look at `src/controllers/ProductController.ts` and `src/routes/products.ts` for reference on how we structure our endpoints in this project.
+> Look at `src/views/UserProfile.tsx` and see what placeholders we are using to deduce what fields should be returned.
+> Follow the same patterns for error handling and response formatting.
 
-**Example: Better prompt**
-
-```
-Create a new API endpoint for fetching user profiles.
-Look at src/controllers/ProductController.ts and src/routes/products.ts
-for reference on how we structure our endpoints in this project.
-Look at src/views/UserProfile.tsx and see what placeholders we are using to deduce what fields should be returned.
-Follow the same patterns for error handling and response formatting.
-```
 
 The second prompt is more effective because it provides the AI with concrete reference points from your actual codebase. Instead of leaving the AI to guess your project's conventions, you're explicitly pointing it to existing implementations. The AI can now read those files, understand your patterns for routing, controller structure, and error handling, and produce code that fits seamlessly into your project. 
The first prompt would likely result in generic code that follows common conventions from its training data, requiring significant refactoring to match your style.
 
-Later in this article, I'll show how even the bad example can yield good results by leveraging Planning mode.
+But what if I told you that even the bad prompt can yield excellent results? Enter Planning mode.
+
+## Just Use Planning Mode
+
+I'm a huge advocate of Planning mode. In Agent mode, Copilot immediately starts executing. And since AI is fundamentally a prediction machine, it will confidently keep going even when it's heading in the wrong direction.
+
+Planning mode, on the other hand, forces the AI to stop, think and explore the codebase first. It creates a step-by-step plan of what it *intends* to do and asks for clarifying questions. This simple change makes a massive difference. You can catch misunderstandings early, redirect the approach, or refine your requirements before any code is written. And most importantly, it automatically loads exactly the files and references it needs into context so it can complete the task with good results without hallucinations.
+
+Let's see how the same "bad prompt" can outperform our carefully crafted "better prompt" when used with Planning mode.
+
+**You:**
+> Create a new API endpoint for user profiles.
+
+**Planning Agent:**
+> I'll help you create a new API endpoint for user profiles. Let me first understand your project structure and conventions.
+> 
+> **Gathering context...**
+> - Reading `src/controllers/` to understand your controller patterns
+> - Reading `src/routes/` to see how routes are structured
+> - Reading `src/models/User.ts` to understand the User entity
+> - Reading `src/views/UserProfile.tsx` to see what fields the frontend expects
+> 
+> **⚠️ I noticed something:**
+> `ProductController.ts` uses Redis caching with a 5-minute TTL for GET requests. Since `UserProfile.tsx` fetches this data frequently, I'll apply the same caching strategy to improve performance.
+> 
+> **Here's my plan:**
+> 1. Create `src/controllers/UserController.ts` following the pattern in `ProductController.ts`
+> 2. Add route `/api/users/:id` in `src/routes/users.ts`
+> 3. Return fields: `id`, `name`, `email`, `avatar`, `createdAt` (based on `UserProfile.tsx` usage)
+> 4. Implement Redis caching with 5-minute TTL, same as `ProductController.ts`
+> 5. 
Use the same `ApiResponse` wrapper and error handling as other controllers
+> 6. Add input validation for the user ID parameter
+> 
+> **Questions before I proceed:**
+> - Should this endpoint require authentication?
+> - Do you want pagination for listing multiple users?
+> 
+> Does this plan look correct?
+
+Now we are cooking! You can see exactly what the agent intends to do before it writes a single line of code. You can answer its questions, refine the plan, and polish it until it's exactly right. In my experience, the agent often catches details I overlooked (like the caching pattern example above), which would have caused inconsistencies later. Now it's just a matter of deciding whether to one-shot the whole plan or execute it step by step.
 
-## Concrete takeaways
+## How not to get overwhelmed
 
-More content...
+The world of agentic coding is evolving way too fast for anyone to stay on top of everything that's new.
 
-
-- Multiagent setups, skills, frameworks etc etc etc -> just focus on understanding context
\ No newline at end of file
+- Multiagent setups, skills, Ralph Loops etc etc etc -> just focus on understanding context
\ No newline at end of file

From 25c75b772e71fc324eb9a2908ef058e8a9569a15 Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Wed, 4 Feb 2026 13:19:10 +0200
Subject: [PATCH 05/22] How not to get overwhelmed

---
 _posts/2026-02-03-my-post-title.md | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/_posts/2026-02-03-my-post-title.md b/_posts/2026-02-03-my-post-title.md
index 293ab623..ddc70991 100644
--- a/_posts/2026-02-03-my-post-title.md
+++ b/_posts/2026-02-03-my-post-title.md
@@ -91,11 +91,20 @@
 > 
 > Does this plan look correct?
 
-Now we are cooking! You can see exactly what the agent intends to do before it writes a single line of code. You can answer its questions, refine the plan, and polish it until it's exactly right. In my experience, the agent often catches details I overlooked (like the caching pattern example above), which would have caused inconsistencies later. Now it's just a matter of deciding whether to one-shot the whole plan or execute it step by step.
+Now we are cooking! You can see exactly what the agent intends to do before it writes a single line of code. You can answer its questions, refine the plan, and polish it until it's exactly right. In my experience, the agent often catches details I overlooked (like the caching pattern example above), which would have caused inconsistencies later.
+
+## Prompting is a skill
+
+This is what "coding" looks like in my workflow nowadays: going back and forth with the AI to refine the plan until it's right. I get to focus on the big picture, how the pieces fit together, and in the end I design better features, improve the codebase through refactoring, and save time because the code writing is automated.
+
+But getting here wasn't instant. At first, I felt like an idiot when nothing worked. After my initial attempts, I caught myself thinking "I can code faster by hand than fixing the AI's mistakes." It took about two weeks to break even with manual coding, and another few weeks before the new approach truly clicked.
+
+Prompting is a skill just like coding. And like any new skill, you have to accept a small ego hit to make progress. The hardest part isn't learning new syntax or tools. It's letting go of the urge to dive straight into implementation. 
You have to trust the process: describe the problem clearly, let the AI explore, refine the plan together, and only then execute. Once you've done a day's worth of work in minutes without the AI making a single mistake, you'll never want to go back.
 
 ## How not to get overwhelmed
 
-The world of agentic coding is evolving way too fast for anyone to stay on top of everything that's new.
+The world of agentic coding is evolving way too fast for anyone to stay on top of everything. New concepts emerge constantly: [MCP (Model Context Protocol)](https://modelcontextprotocol.io/introduction) lets agents connect to databases, APIs, and external tools. [Agent Skills](https://docs.github.com/en/copilot/concepts/agents/about-agent-skills) give Copilot specialized capabilities for specific tasks. Multi-agent orchestrators like [Gas Town](https://github.com/steveyegge/gastown) let you coordinate 20-30 Claude Code agents working in parallel with persistent work tracking. And [custom agents](https://code.visualstudio.com/docs/copilot/customization/custom-agents) let you create specialized assistants tailored to your workflow.
 
-- Multiagent setups, skills, Ralph Loops etc etc etc -> just focus on understanding context
\ No newline at end of file
+It can feel overwhelming. If I changed my workflow every time a new tool came up, I wouldn't get any work done. And here's the thing: all of these features are ultimately just different ways to feed better instructions to the model.
+
+My advice? Don't chase every new feature. Focus on mastering the fundamentals: understanding context, writing clear prompts, and using Planning mode. Once you've nailed those, the advanced features will make much more sense.
\ No newline at end of file

From bc623486a52ebd4fb6d10426bfc762b81d4fea75 Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Wed, 4 Feb 2026 13:48:08 +0200
Subject: [PATCH 06/22] basically done

---
 _posts/2026-02-03-my-post-title.md | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/_posts/2026-02-03-my-post-title.md b/_posts/2026-02-03-my-post-title.md
index ddc70991..ad0aaa2d 100644
--- a/_posts/2026-02-03-my-post-title.md
+++ b/_posts/2026-02-03-my-post-title.md
@@ -107,4 +107,21 @@
 It can feel overwhelming. If I changed my workflow every time a new tool came up, I wouldn't get any work done. And here's the thing: all of these features are ultimately just different ways to feed better instructions to the model.
 
-My advice? Don't chase every new feature. Focus on mastering the fundamentals: understanding context, writing clear prompts, and using Planning mode. Once you've nailed those, the advanced features will make much more sense.
\ No newline at end of file
+My advice? Don't chase every new feature. Focus on mastering the fundamentals: understanding context, writing clear prompts, and using Planning mode. Once you've nailed those, the advanced features will make much more sense.
+
+## Getting started
+
+This is what you need to get going:
+
+1. Get a GitHub Copilot license from [IT Services](https://it.services.solita.fi/support/catalog/items/134)
+2. Install the [GitHub Copilot Chat](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat) extension in VS Code
+3. Log in with your Solita GitHub account
+
+That's it. You should now have the Copilot chat panel on the right side of VS Code. 
You can switch between Agent and Plan modes using the dropdown, and choose which model to use. In my opinion, the best coding model right now is Claude Opus 4.5. + +## Conclusion +AI-assisted development isn't magic, and it's not going to replace you. It's a tool that enables you to focus on solving the actual problem and helps you save time by automating the coding part. + +Start with Planning mode. Give the AI context. Break big problems into smaller ones. Accept that there's a learning curve and your performance takes a hit in the beginning. And when it finally clicks, you realize that the only limiting factor is your imagination. + +Now go try it. Try the Plan mode and see what happens, experiment. You can always revert and try again. \ No newline at end of file From 8f4039afd1d8dfc0f3fe3398d8af2f39cd24b30e Mon Sep 17 00:00:00 2001 From: Markus Kumpulainen Date: Wed, 4 Feb 2026 14:00:18 +0200 Subject: [PATCH 07/22] Backend is .NET in examples --- _posts/2026-02-03-my-post-title.md | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/_posts/2026-02-03-my-post-title.md b/_posts/2026-02-03-my-post-title.md index ad0aaa2d..fbac4a1f 100644 --- a/_posts/2026-02-03-my-post-title.md +++ b/_posts/2026-02-03-my-post-title.md @@ -15,7 +15,7 @@ tags: Let's be honest. Those who have embraced AI as part of their daily development work have noticed significant improvements in both speed and quality. So the question is no longer "Is AI useful for coding?" but rather "How do I get the most out of it?" -Why do some developers see tremendous benefits while others end up with spaghetti code? I took on a challenge at the end of last year to only work by prompting, in order to learn the ins and outs of AI-assisted development. At my current level, I haven't written a single line of code by hand in a couple of months, yet my speed and quality have only improved. +Why do some developers see tremendous benefits while others end up with spaghetti code? I took on a challenge at the end of last year to only work by prompting, in order to learn the ins and outs of AI-assisted development. These days, I rarely type code by hand - not because I can't, but because prompting has become faster. I still read and understand every line the AI produces. The typing is just automated. In this blog post, I'll share my personal learnings from this journey and try to extract the most important takeaways in this Prompt Engineering 101 starter guide. @@ -46,8 +46,8 @@ Let me illustrate this with a practical example. **Better prompt:** > Create a new API endpoint for fetching user profiles. -> Look at `src/controllers/ProductController.ts` and `src/routes/products.ts` for reference on how we structure our endpoints in this project. -> Look at `src/views/UserProfile.tsx` and see what placeholders we are using to deduce what fields should be returned. +> Look at `Controllers/ProductController.cs` for reference on how we structure our endpoints and routing attributes. +> Look at `ClientApp/src/views/UserProfile.tsx` and see what placeholders we are using to deduce what fields should be returned. > Follow the same patterns for error handling and response formatting. The second prompt is more effective because it provides the AI with concrete reference points from your actual codebase. Instead of leaving the AI to guess your project's conventions, you're explicitly pointing it to existing implementations. 
The AI can now read those files, understand your patterns for routing, controller structure, and error handling, and produce code that fits seamlessly into your project. The first prompt would likely result in generic code that follows common conventions from its training data, requiring significant refactoring to match your style. @@ -69,21 +69,21 @@ Let's see how the same "bad prompt" can outperform our carefully crafted "better > I'll help you create a new API endpoint for user profiles. Let me first understand your project structure and conventions. > > **Gathering context...** -> - Reading `src/controllers/` to understand your controller patterns -> - Reading `src/routes/` to see how routes are structured -> - Reading `src/models/User.ts` to understand the User entity -> - Reading `src/views/UserProfile.tsx` to see what fields the frontend expects +> - Reading `Controllers/` to understand your controller patterns and routing attributes +> - Reading `Models/User.cs` to understand the User entity +> - Reading `Services/` to see how business logic is structured +> - Reading `ClientApp/src/views/UserProfile.tsx` to see what fields the frontend expects > > **⚠️ I noticed something:** -> `ProductController.ts` uses Redis caching with a 5-minute TTL for GET requests. Since `UserProfile.tsx` fetches this data frequently, I'll apply the same caching strategy to improve performance. +> `ProductController.cs` uses `IDistributedCache` with a 5-minute TTL for GET requests. Since `UserProfile.tsx` fetches this data frequently, I'll apply the same caching strategy to improve performance. > > **Here's my plan:** -> 1. Create `src/controllers/UserController.ts` following the pattern in `ProductController.ts` -> 2. Add route `/api/users/:id` in `src/routes/users.ts` -> 3. Return fields: `id`, `name`, `email`, `avatar`, `createdAt` (based on `UserProfile.tsx` usage) -> 4. Implement Redis caching with 5-minute TTL, same as `ProductController.ts` -> 5. Use the same `ApiResponse` wrapper and error handling as other controllers -> 6. Add input validation for the user ID parameter +> 1. Create `Controllers/UserController.cs` following the pattern in `ProductController.cs` +> 2. Add route `/api/users/{id}` using `[Route]` and `[HttpGet]` attributes +> 3. Create `DTOs/UserProfileDto.cs` with fields: `Id`, `Name`, `Email`, `Avatar`, `CreatedAt` (based on `UserProfile.tsx` usage) +> 4. Implement Redis caching with 5-minute TTL using `IDistributedCache`, same as `ProductController.cs` +> 5. Use the same `ApiResponse` wrapper and error handling as other controllers +> 6. Add input validation using FluentValidation > > **Questions before I proceed:** > - Should this endpoint require authentication? 
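For illustration, here is roughly what step 1 of the plan above could produce once the plan is approved. This is a hypothetical sketch, not actual Copilot output: `IUserService`, `UserProfileDto`, and the `ApiResponse<T>` wrapper are stand-ins for whatever the real project defines, while the `IDistributedCache` calls and the controller attributes follow the documented ASP.NET Core APIs.

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Distributed;
using System.Text.Json;

[ApiController]
[Route("api/users")]
public class UserController : ControllerBase
{
    private readonly IUserService _userService;   // assumed service layer, not from the post
    private readonly IDistributedCache _cache;

    public UserController(IUserService userService, IDistributedCache cache)
    {
        _userService = userService;
        _cache = cache;
    }

    [HttpGet("{id:int}")]
    public async Task<ActionResult<ApiResponse<UserProfileDto>>> GetProfile(int id)
    {
        var cacheKey = $"user-profile:{id}";

        // Cache-aside: check the distributed cache first, mirroring ProductController
        var cached = await _cache.GetStringAsync(cacheKey);
        if (cached is not null)
        {
            var dto = JsonSerializer.Deserialize<UserProfileDto>(cached)!;
            return Ok(ApiResponse<UserProfileDto>.Success(dto));
        }

        var profile = await _userService.GetProfileAsync(id);
        if (profile is null)
        {
            return NotFound(ApiResponse<UserProfileDto>.Fail("User not found"));
        }

        // Store with the same 5-minute TTL the plan calls out
        await _cache.SetStringAsync(
            cacheKey,
            JsonSerializer.Serialize(profile),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            });

        return Ok(ApiResponse<UserProfileDto>.Success(profile));
    }
}
```

The value of the plan is visible here: the caching strategy, the DTO fields, and the response wrapper all come from conventions the agent read out of the codebase, not from generic training-data defaults.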
From 672bcd99588ae640a27afa9c48cc4a3c7501810d Mon Sep 17 00:00:00 2001 From: Markus Kumpulainen Date: Wed, 4 Feb 2026 14:30:05 +0200 Subject: [PATCH 08/22] Finalized --- ...y-post-title.md => 2026-02-03-prompt-engineering-101.md} | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) rename _posts/{2026-02-03-my-post-title.md => 2026-02-03-prompt-engineering-101.md} (92%) diff --git a/_posts/2026-02-03-my-post-title.md b/_posts/2026-02-03-prompt-engineering-101.md similarity index 92% rename from _posts/2026-02-03-my-post-title.md rename to _posts/2026-02-03-prompt-engineering-101.md index fbac4a1f..63f8e1d8 100644 --- a/_posts/2026-02-03-my-post-title.md +++ b/_posts/2026-02-03-prompt-engineering-101.md @@ -93,6 +93,10 @@ Let's see how the same "bad prompt" can outperform our carefully crafted "better Now we are cooking! You can see exactly what the agent intends to do before it writes a single line of code. You can answer its questions, refine the plan, and polish it until it's exactly right. In my experience, the agent often catches details I overlooked (like the caching pattern example above), which would have caused inconsistencies later. +Planning mode doesn't magically eliminate hallucinations, but it *does* shift the workflow in your favor: it front-loads context gathering, makes assumptions explicit, and gives you a review gate before any code touches your repo. + +"But what about the hard stuff - race conditions, complex state machines, security edge cases?" These are exactly where good prompting matters most. The AI struggles when you're vague, but if you can enumerate the edge cases, describe the state transitions, or specify the security requirements, it handles them remarkably well. Of course, this assumes you actually understand the problem. My point is: If you know how to solve the problem, AI can too. It's just a matter of breaking it down into smaller pieces the AI can handle. + ## Prompting is a skill This is what "coding" looks like in my workflow nowadays: going back and forth with the AI to refine the plan until it's right. I get to focus on the big picture, how the pieces fit together, and in the end I design better features, improve the codebase through refactoring, and save time because the code writing is automated. @@ -122,6 +126,6 @@ That's it. You should now have the Copilot chat panel on the right side of VS Co ## Conclusion AI-assisted development isn't magic, and it's not going to replace you. It's a tool that enables you to focus on solving the actual problem and helps you save time by automating the coding part. -Start with Planning mode. Give the AI context. Break big problems into smaller ones. Accept that there's a learning curve and your performance takes a hit in the beginning. And when it finally clicks, you realize that the only limiting factor is your imagination. +Start with Planning mode. Give the AI context. Break big problems into smaller ones. Accept that there's a learning curve and your performance takes a hit in the beginning. And when it finally clicks, the bottleneck moves from typing to thinking. Now go try it. Try the Plan mode and see what happens, experiment. You can always revert and try again. 
\ No newline at end of file From 9a065b4970a82b3f8265ba4bda0978dbfc0e9190 Mon Sep 17 00:00:00 2001 From: Markus Kumpulainen Date: Wed, 4 Feb 2026 15:54:41 +0200 Subject: [PATCH 09/22] Getting to the point faster --- _posts/2026-02-03-prompt-engineering-101.md | 34 ++++++++++++--------- 1 file changed, 19 insertions(+), 15 deletions(-) diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md index 63f8e1d8..20adc17b 100644 --- a/_posts/2026-02-03-prompt-engineering-101.md +++ b/_posts/2026-02-03-prompt-engineering-101.md @@ -2,7 +2,7 @@ layout: post title: Prompt Engineering 101 author: solita-markump -excerpt: Starter Guide to Prompt Engineering - How to Get Good Results +excerpt: Starter Guide to Prompt Engineering: How to Get Good Results tags: - AI - Generative AI @@ -15,15 +15,13 @@ tags: Let's be honest. Those who have embraced AI as part of their daily development work have noticed significant improvements in both speed and quality. So the question is no longer "Is AI useful for coding?" but rather "How do I get the most out of it?" -Why do some developers see tremendous benefits while others end up with spaghetti code? I took on a challenge at the end of last year to only work by prompting, in order to learn the ins and outs of AI-assisted development. These days, I rarely type code by hand - not because I can't, but because prompting has become faster. I still read and understand every line the AI produces. The typing is just automated. +Why do some developers see tremendous benefits while others end up with spaghetti code? I took on a challenge at the end of last year to only work by prompting, in order to learn the ins and outs of AI-assisted development. In this blog post, I'll share my personal learnings from this journey and try to extract the most important takeaways in this Prompt Engineering 101 starter guide. ## So What's the Problem? -Most of us (myself included) first encountered AI capabilities through ChatGPT. For a long time, it served as a "Google replacement" and felt like a knowledgeable colleague you could ask for help and ideas. However, I quickly noticed AI's limitations, as I often received hallucinations and outdated information in return. Then came agents and code generation, where the impact of hallucinations and poor code quality actually became a hindrance rather than a benefit. - -A common mistake when starting out is asking the AI to solve problems that are too complex. The workflow typically goes something like this: +A common mistake when starting out is asking the AI to solve problems that are too complex. I made this mistake myself and have observed many others doing the same. The workflow typically goes something like this: 1. There's an issue to fix 2. You debug manually and try to find the root cause 3. You exhaust all your own resources trying to solve it @@ -33,7 +31,9 @@ The result is predictably poor and useless. When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity." Instead, you need to guide the AI like you would instruct a junior developer, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simpler tasks, I found it performed remarkably well! -This is the most important lesson in prompting: If you don't know how something should be done, the AI doesn't know either. 
AI is an eager guessing machine, and without clear guidance, it will simply guess, eagerly. +**This is the most important lesson in prompting:** If you don't know how something should be done, the AI doesn't know either. + +AI is fundamentally a guessing machine. Without clear guidance, it will confidently guess and keep guessing. The quality of your output is directly tied to the clarity of your instructions. You can't outsource understanding to the AI. ## Context Is Everything @@ -44,13 +44,15 @@ Let me illustrate this with a practical example. **Bad prompt:** > Create a new API endpoint for user profiles. +This prompt would likely result in generic code that follows common conventions from its training data, requiring significant refactoring to match your style. + **Better prompt:** > Create a new API endpoint for fetching user profiles. > Look at `Controllers/ProductController.cs` for reference on how we structure our endpoints and routing attributes. > Look at `ClientApp/src/views/UserProfile.tsx` and see what placeholders we are using to deduce what fields should be returned. > Follow the same patterns for error handling and response formatting. -The second prompt is more effective because it provides the AI with concrete reference points from your actual codebase. Instead of leaving the AI to guess your project's conventions, you're explicitly pointing it to existing implementations. The AI can now read those files, understand your patterns for routing, controller structure, and error handling, and produce code that fits seamlessly into your project. The first prompt would likely result in generic code that follows common conventions from its training data, requiring significant refactoring to match your style. +The second prompt is more effective because it provides the AI with concrete reference points from your actual codebase. Instead of leaving the AI to guess your project's conventions, you're explicitly pointing it to existing implementations. The AI can now read those files, understand your patterns for routing, controller structure, and error handling, and produce code that fits seamlessly into your project. But what if I told you that even the bad prompt can yield excellent results? Enter Planning mode. @@ -58,7 +60,7 @@ But what if I told you that even the bad prompt can yield excellent results? Ent I'm a huge advocate of Planning mode. In Agent mode, Copilot immediately starts executing. And since AI is fundamentally a prediction machine, it will confidently keep going even when it's heading in the wrong direction. -Planning mode, on the other hand, forces the AI to stop, think and explore the codebase first. It creates a step-by-step plan of what it *intends* to do and asks for clarifying questions. This simple change makes a massive difference. You can catch misunderstandings early, redirect the approach, or refine your requirements before any code is written. And most importantly, it automatically loads exactly the files and references it needs into context so it can complete the task with good results without hallusinations. +Planning mode, on the other hand, forces the AI to stop, think and *explore the codebase* first. It creates a step-by-step plan of what it *intends* to do and asks for clarifying questions. This simple change makes a massive difference. You can catch misunderstandings early, redirect the approach, or refine your requirements before any code is written. 
And most importantly, it automatically loads exactly the files and references it needs into context so it can complete the task with good results without going off the rails. Let's see how the same "bad prompt" can outperform our carefully crafted "better prompt" when used with Planning mode. @@ -93,19 +95,21 @@ Let's see how the same "bad prompt" can outperform our carefully crafted "better Now we are cooking! You can see exactly what the agent intends to do before it writes a single line of code. You can answer its questions, refine the plan, and polish it until it's exactly right. In my experience, the agent often catches details I overlooked (like the caching pattern example above), which would have caused inconsistencies later. -Planning mode doesn't magically eliminate hallucinations, but it *does* shift the workflow in your favor: it front-loads context gathering, makes assumptions explicit, and gives you a review gate before any code touches your repo. +"But what about the hard stuff like race conditions, complex state machines, and security edge cases?" These are exactly where good prompting matters most. The AI struggles when you're vague, but if you can enumerate the edge cases, describe the state transitions, or specify the security requirements, it handles them remarkably well. Of course, this assumes you actually understand the problem. -"But what about the hard stuff - race conditions, complex state machines, security edge cases?" These are exactly where good prompting matters most. The AI struggles when you're vague, but if you can enumerate the edge cases, describe the state transitions, or specify the security requirements, it handles them remarkably well. Of course, this assumes you actually understand the problem. My point is: If you know how to solve the problem, AI can too. It's just a matter of breaking it down into smaller pieces the AI can handle. +Remember the key lesson from earlier: if you don't know how to solve the problem, the AI doesn't know either. The opposite is true as well. If you understand the solution, the AI can execute it. The challenge is breaking down that complexity into clear, actionable steps. -## Prompting is a skill +## Prompting Is a Skill -This is what "coding" looks like in my workflow nowadays: going back and forth with the AI to refine the plan until it's right. I get to focus on the big picture, how the pieces fit together, and in the end I design better features, improve the codebase through refactoring, and save time because the code writing is automated. +This is what "coding" looks like in my workflow nowadays: going back and forth with the AI to refine the plan until it's right. I get to focus on the big picture and how the pieces fit together. In the end, I design better features, improve the codebase through refactoring, and save time because the code writing is automated. But getting here wasn't instant. At first, I felt like an idiot when nothing worked. After my initial attempts, I caught myself thinking "I can code faster by hand than fixing the AI's mistakes." It took about two weeks to break even with manual coding, and another few weeks before the new approach truly clicked. -Prompting is a skill just like coding. And like any new skill, you have to accept a small ego hit to make progress. The hardest part isn't learning new syntax or tools. It's letting go of the urge to dive straight into implementation. 
You have to trust the process: describe the problem clearly, let the AI explore, refine the plan together, and only then execute. Once you've done a day's worth of work in minutes without the AI making a single mistake, you'll never want to go back. +Prompting is a skill just like coding. And like any new skill, you have to accept a small ego hit to make progress. The hardest part is getting started. You don't yet know how to talk to the agent. Your prompts will fail. You'll redo things. A lot. But with each mistake, you learn what works and what doesn't. + +**The point comes eventually when you realize you've done a day's worth of work in minutes without the AI making a single mistake.** After that, there's no going back. -## How not to get overwhelmed +## How Not to Get Overwhelmed The world of agentic coding is evolving way too fast for anyone to stay on top of everything. New concepts emerge constantly: [MCP (Model Context Protocol)](https://modelcontextprotocol.io/introduction) lets agents connect to databases, APIs, and external tools. [Agent Skills](https://docs.github.com/en/copilot/concepts/agents/about-agent-skills) give Copilot specialized capabilities for specific tasks. Multi-agent orchestrators like [Gas Town](https://github.com/steveyegge/gastown) let you coordinate 20-30 Claude Code agents working in parallel with persistent work tracking. And [custom agents](https://code.visualstudio.com/docs/copilot/customization/custom-agents) let you create specialized assistants tailored to your workflow. @@ -113,7 +117,7 @@ It can feel overwhelming. If I changed my workflow every time a new tool came up My advice? Don't chase every new feature. Focus on mastering the fundamentals: understanding context, writing clear prompts, and using Planning mode. Once you've nailed those, the advanced features will make much more sense. -## Getting started +## Getting Started This is what you need to get going: From 73f011ae114205c6fe9bbdeac93de4ff244656d8 Mon Sep 17 00:00:00 2001 From: Markus Kumpulainen Date: Thu, 5 Feb 2026 10:24:22 +0200 Subject: [PATCH 10/22] Fine tuned based on feedback --- _posts/2026-02-03-prompt-engineering-101.md | 35 ++++++++++++--------- 1 file changed, 21 insertions(+), 14 deletions(-) diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md index 20adc17b..5c5c42d6 100644 --- a/_posts/2026-02-03-prompt-engineering-101.md +++ b/_posts/2026-02-03-prompt-engineering-101.md @@ -2,7 +2,7 @@ layout: post title: Prompt Engineering 101 author: solita-markump -excerpt: Starter Guide to Prompt Engineering: How to Get Good Results +excerpt: "Starter Guide to Prompt Engineering: How to Get Good Results" tags: - AI - Generative AI @@ -17,7 +17,7 @@ Let's be honest. Those who have embraced AI as part of their daily development w Why do some developers see tremendous benefits while others end up with spaghetti code? I took on a challenge at the end of last year to only work by prompting, in order to learn the ins and outs of AI-assisted development. -In this blog post, I'll share my personal learnings from this journey and try to extract the most important takeaways in this Prompt Engineering 101 starter guide. +In this post, I'll share the key lessons from that journey, and hopefully inspire you to give it (another) try. ## So What's the Problem? @@ -29,11 +29,13 @@ A common mistake when starting out is asking the AI to solve problems that are t The result is predictably poor and useless. 
-When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity." Instead, you need to guide the AI like you would instruct a junior developer, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simpler tasks, I found it performed remarkably well! +When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity" or a sparring partner. Instead, you need to guide the AI like you would instruct a junior developer, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simpler tasks, I found it performed remarkably well! -**This is the most important lesson in prompting:** If you don't know how something should be done, the AI doesn't know either. +That mental shift boils down to one rule. Tell, don't ask. -AI is fundamentally a guessing machine. Without clear guidance, it will confidently guess and keep guessing. The quality of your output is directly tied to the clarity of your instructions. You can't outsource understanding to the AI. +This is the most important lesson in prompting: **If you don't know how something should be done, the AI doesn't know either.** + +AI is fundamentally a guessing machine. Without clear guidance, it will confidently guess and keep guessing. The quality of your output is directly tied to the clarity of your instructions. ## Context Is Everything @@ -44,7 +46,7 @@ Let me illustrate this with a practical example. **Bad prompt:** > Create a new API endpoint for user profiles. -This prompt would likely result in generic code that follows common conventions from its training data, requiring significant refactoring to match your style. +This prompt would likely result in unexpected changes across your codebase and generic code based on common conventions from training data. **Better prompt:** > Create a new API endpoint for fetching user profiles. @@ -77,7 +79,7 @@ Let's see how the same "bad prompt" can outperform our carefully crafted "better > - Reading `ClientApp/src/views/UserProfile.tsx` to see what fields the frontend expects > > **⚠️ I noticed something:** -> `ProductController.cs` uses `IDistributedCache` with a 5-minute TTL for GET requests. Since `UserProfile.tsx` fetches this data frequently, I'll apply the same caching strategy to improve performance. +> Some of your GET endpoints use `IDistributedCache` with a 5-minute TTL. Since user profiles are read frequently but rarely updated, I'll apply the same caching strategy here. > > **Here's my plan:** > 1. Create `Controllers/UserController.cs` following the pattern in `ProductController.cs` @@ -95,9 +97,16 @@ Let's see how the same "bad prompt" can outperform our carefully crafted "better Now we are cooking! You can see exactly what the agent intends to do before it writes a single line of code. You can answer its questions, refine the plan, and polish it until it's exactly right. In my experience, the agent often catches details I overlooked (like the caching pattern example above), which would have caused inconsistencies later. -"But what about the hard stuff like race conditions, complex state machines, and security edge cases?" These are exactly where good prompting matters most. 
The AI struggles when you're vague, but if you can enumerate the edge cases, describe the state transitions, or specify the security requirements, it handles them remarkably well. Of course, this assumes you actually understand the problem. +Here's the magic of context. Once the agent has explored your codebase and built a plan, you don't need to start from scratch for related tasks. The relevant files are already loaded, and the agent remembers what it just did. Your next prompt can be much simpler: + +**You:** +> Update `UserProfile.tsx` to call the new endpoint and replace the placeholders with real data. + +"What about the hard stuff like race conditions, complex state machines, and security edge cases?" These are exactly where good prompting matters most. The AI struggles when you're vague, but if you can enumerate the edge cases, describe the state transitions, or specify the security requirements, it handles them remarkably well. Of course, this assumes you actually understand the problem. + +But wait... Earlier I said the problem was giving AI too complex problems to solve. Well, complexity itself isn't the enemy. Unclear complexity is. AI can solve any complex task but the challenge is breaking down that complexity into clear, actionable steps. -Remember the key lesson from earlier: if you don't know how to solve the problem, the AI doesn't know either. The opposite is true as well. If you understand the solution, the AI can execute it. The challenge is breaking down that complexity into clear, actionable steps. +Once you get the hang of it, you start developing an intuition for when you can give the AI a broad goal versus when you need to spell out every step. ## Prompting Is a Skill @@ -105,7 +114,7 @@ This is what "coding" looks like in my workflow nowadays: going back and forth w But getting here wasn't instant. At first, I felt like an idiot when nothing worked. After my initial attempts, I caught myself thinking "I can code faster by hand than fixing the AI's mistakes." It took about two weeks to break even with manual coding, and another few weeks before the new approach truly clicked. -Prompting is a skill just like coding. And like any new skill, you have to accept a small ego hit to make progress. The hardest part is getting started. You don't yet know how to talk to the agent. Your prompts will fail. You'll redo things. A lot. But with each mistake, you learn what works and what doesn't. +Prompting is a skill just like any other. You have to accept a small ego hit and feel dumb for a bit to make progress. The hardest part is getting started. You don't yet know how to talk to the agent. Your prompts will fail. You'll redo things. A lot. But with each mistake, you learn what works and what doesn't. **The point comes eventually when you realize you've done a day's worth of work in minutes without the AI making a single mistake.** After that, there's no going back. @@ -125,11 +134,9 @@ This is what you need to get going: 2. Install the [GitHub Copilot Chat](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat) extension in VS Code 3. Log in with your Solita GitHub account -That's it. You should now have the Copilot chat panel on the right side of VS Code. You can switch between Agent and Plan modes using the dropdown, and choose which model to use. In my opinion, the best coding model right now is Claude Opus 4.5. +That's it. You should now have the Copilot chat panel on the right side of VS Code. 
You can switch between Agent and Plan modes using the dropdown, and choose which model to use. In my opinion, the best coding model right now is **Claude Opus 4.5.** ## Conclusion AI-assisted development isn't magic, and it's not going to replace you. It's a tool that enables you to focus on solving the actual problem and helps you save time by automating the coding part. -Start with Planning mode. Give the AI context. Break big problems into smaller ones. Accept that there's a learning curve and your performance takes a hit in the beginning. And when it finally clicks, the bottleneck moves from typing to thinking. - -Now go try it. Try the Plan mode and see what happens, experiment. You can always revert and try again. \ No newline at end of file +Start with Planning mode. Just talk to the agent. Break big problems into smaller ones together with the agent. Accept that there's a learning curve and your performance takes a hit in the beginning. And when it finally clicks, the bottleneck moves from typing to thinking. \ No newline at end of file From d7f7470f78fdc76ef695ad857573b394685d883c Mon Sep 17 00:00:00 2001 From: Markus Kumpulainen Date: Thu, 5 Feb 2026 10:36:29 +0200 Subject: [PATCH 11/22] Small fix --- _posts/2026-02-03-prompt-engineering-101.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md index 5c5c42d6..22ded68f 100644 --- a/_posts/2026-02-03-prompt-engineering-101.md +++ b/_posts/2026-02-03-prompt-engineering-101.md @@ -31,9 +31,7 @@ The result is predictably poor and useless. When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity" or a sparring partner. Instead, you need to guide the AI like you would instruct a junior developer, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simpler tasks, I found it performed remarkably well! -That mental shift boils down to one rule. Tell, don't ask. - -This is the most important lesson in prompting: **If you don't know how something should be done, the AI doesn't know either.** +That mental shift boils down to one rule: Tell, don't ask. **If you don't know how something should be done, the AI doesn't know either.** AI is fundamentally a guessing machine. Without clear guidance, it will confidently guess and keep guessing. The quality of your output is directly tied to the clarity of your instructions. From 15796feac397dc3826b079b46c2db7ec8cf514db Mon Sep 17 00:00:00 2001 From: Markus Kumpulainen Date: Thu, 5 Feb 2026 14:13:48 +0200 Subject: [PATCH 12/22] More improvements --- _posts/2026-02-03-prompt-engineering-101.md | 19 +++++++++++++++---- 1 file changed, 15 insertions(+), 4 deletions(-) diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md index 22ded68f..46503388 100644 --- a/_posts/2026-02-03-prompt-engineering-101.md +++ b/_posts/2026-02-03-prompt-engineering-101.md @@ -29,9 +29,9 @@ A common mistake when starting out is asking the AI to solve problems that are t The result is predictably poor and useless. -When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity" or a sparring partner. 
Instead, you need to guide the AI like you would instruct a junior developer, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simpler tasks, I found it performed remarkably well! +When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity" or a sparring partner. Instead, you need to guide the AI like you would instruct a junior developer, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simple and clear tasks, I found it performed remarkably well! -That mental shift boils down to one rule: Tell, don't ask. **If you don't know how something should be done, the AI doesn't know either.** +That mental shift boils down to this: Tell, don't ask. **If you don't know how something should be done, the AI doesn't know either.** AI is fundamentally a guessing machine. Without clear guidance, it will confidently guess and keep guessing. The quality of your output is directly tied to the clarity of your instructions. @@ -60,7 +60,7 @@ But what if I told you that even the bad prompt can yield excellent results? Ent I'm a huge advocate of Planning mode. In Agent mode, Copilot immediately starts executing. And since AI is fundamentally a prediction machine, it will confidently keep going even when it's heading in the wrong direction. -Planning mode, on the other hand, forces the AI to stop, think and *explore the codebase* first. It creates a step-by-step plan of what it *intends* to do and asks for clarifying questions. This simple change makes a massive difference. You can catch misunderstandings early, redirect the approach, or refine your requirements before any code is written. And most importantly, it automatically loads exactly the files and references it needs into context so it can complete the task with good results without going off the rails. +Planning mode, on the other hand, forces the AI to stop, think and *explore the codebase* first. It creates a step-by-step plan of what it *intends* to do and asks for clarifying questions. This simple change makes a massive difference. You can catch misunderstandings early, redirect the approach, or refine your requirements before any code is written. And most importantly, it automatically loads exactly the files and references it needs into context so it can complete the task without going off the rails. Let's see how the same "bad prompt" can outperform our carefully crafted "better prompt" when used with Planning mode. @@ -95,11 +95,22 @@ Let's see how the same "bad prompt" can outperform our carefully crafted "better Now we are cooking! You can see exactly what the agent intends to do before it writes a single line of code. You can answer its questions, refine the plan, and polish it until it's exactly right. In my experience, the agent often catches details I overlooked (like the caching pattern example above), which would have caused inconsistencies later. -Here's the magic of context. Once the agent has explored your codebase and built a plan, you don't need to start from scratch for related tasks. The relevant files are already loaded, and the agent remembers what it just did. Your next prompt can be much simpler: +Here's the magic of context. Once the agent has explored your codebase and built a plan, you don't need to start from scratch for related tasks. 
+Here's the magic of context. Once the agent has explored your codebase and built a plan, you don't need to start from scratch for related tasks. The relevant files are already loaded, and the agent remembers what it just did. Your next prompts can be much simpler:

**You:**
> Update `UserProfile.tsx` to call the new endpoint and replace the placeholders with real data.

+**You:**
+> Add unit tests for the new endpoint. Look at `ProductControllerTests.cs` for reference.
+
+Note here we still need to point the agent to the right file for reference.
+
+**You:**
+> Actually the CreatedAt timestamp is not needed. Remove it from the response DTO and from the UI.
+
+**You:**
+> When the user id does not exist we hit 404 but in this case we want to redirect to the front page. Look at `ProductPage.tsx` for example.
+
"What about the hard stuff like race conditions, complex state machines, and security edge cases?" These are exactly where good prompting matters most. The AI struggles when you're vague, but if you can enumerate the edge cases, describe the state transitions, or specify the security requirements, it handles them remarkably well. Of course, this assumes you actually understand the problem.

But wait... Earlier I said the problem was giving AI too complex problems to solve. Well, complexity itself isn't the enemy. Unclear complexity is. AI can solve any complex task but the challenge is breaking down that complexity into clear, actionable steps.

From 15b2e77fff48e8c71fe5b320251c04a819275fdb Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Thu, 5 Feb 2026 15:17:54 +0200
Subject: [PATCH 13/22] Improved planning agent response

---
 _posts/2026-02-03-prompt-engineering-101.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md
index 46503388..9ecfef77 100644
--- a/_posts/2026-02-03-prompt-engineering-101.md
+++ b/_posts/2026-02-03-prompt-engineering-101.md
@@ -89,7 +89,7 @@ Let's see how the same "bad prompt" can outperform our carefully crafted "better
>
> **Questions before I proceed:**
> - Should this endpoint require authentication?
-> - Do you want pagination for listing multiple users?
+> - Can users view other users' profiles? If so, should we use a separate DTO that omits sensitive fields like email?
>
> Does this plan look correct?
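The revised clarifying question above deserves a concrete illustration, because it is exactly the kind of detail that separates a good plan from a leaky endpoint. Below is a minimal sketch of the split the agent is hinting at; every name is an assumption for illustration, since the project's real models are never shown:

```csharp
// Hypothetical DTOs for the user profile endpoint - illustrative only.

// What the owner of the profile sees.
public record UserProfileDto(int Id, string DisplayName, string Email, string AvatarUrl);

// What other users see: the sensitive Email field is omitted.
public record PublicUserProfileDto(int Id, string DisplayName, string AvatarUrl);
```

One short answer in Planning mode, and a potential data leak is designed out before a single line of code is written.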
From 14582ba9fdbd5a90e9b8f0c6fc7c8f7deed098e4 Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Thu, 5 Feb 2026 15:56:42 +0200
Subject: [PATCH 14/22] Final final v1.21

---
 _posts/2026-02-03-prompt-engineering-101.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md
index 9ecfef77..1dd5a116 100644
--- a/_posts/2026-02-03-prompt-engineering-101.md
+++ b/_posts/2026-02-03-prompt-engineering-101.md
@@ -31,7 +31,7 @@ The result is predictably poor and useless.

When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity" or a sparring partner. Instead, you need to guide the AI like you would instruct a junior developer, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simple and clear tasks, I found it performed remarkably well!

-That mental shift boils down to this: Tell, don't ask. **If you don't know how something should be done, the AI doesn't know either.**
+Here's the thing. **If you don't know how something should be done, the AI doesn't know either.**

AI is fundamentally a guessing machine. Without clear guidance, it will confidently guess and keep guessing. The quality of your output is directly tied to the clarity of your instructions.

@@ -58,7 +58,7 @@ But what if I told you that even the bad prompt can yield excellent results? Ent

## Just Use Planning Mode

-I'm a huge advocate of Planning mode. In Agent mode, Copilot immediately starts executing. And since AI is fundamentally a prediction machine, it will confidently keep going even when it's heading in the wrong direction.
+I'm a huge advocate of Planning mode. In Agent mode, Copilot immediately starts executing, and often in the wrong direction.

Planning mode, on the other hand, forces the AI to stop, think and *explore the codebase* first. It creates a step-by-step plan of what it *intends* to do and asks for clarifying questions. This simple change makes a massive difference. You can catch misunderstandings early, redirect the approach, or refine your requirements before any code is written. And most importantly, it automatically loads exactly the files and references it needs into context so it can complete the task without going off the rails.

@@ -103,7 +103,7 @@

**You:**
> Add unit tests for the new endpoint. Look at `ProductControllerTests.cs` for reference.

-Note here we still need to point the agent to the right file for reference.
+Notice that we still point the agent to the right reference file when needed.

**You:**
> Actually the CreatedAt timestamp is not needed. Remove it from the response DTO and from the UI.
From 4d6e4b9826b46cd37e26e1b02921dd4cecbace0e Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Fri, 6 Feb 2026 10:28:26 +0200
Subject: [PATCH 15/22] Changed the structure a lot

---
 _posts/2026-02-03-prompt-engineering-101.md | 34 ++++++++++++++++++++++++++------------
 1 file changed, 22 insertions(+), 12 deletions(-)

diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md
index 1dd5a116..6119ddde 100644
--- a/_posts/2026-02-03-prompt-engineering-101.md
+++ b/_posts/2026-02-03-prompt-engineering-101.md
@@ -21,15 +21,25 @@ In this post, I'll share the key lessons from that journey, and hopefully inspir

## So What's the Problem?

-A common mistake when starting out is asking the AI to solve problems that are too complex. I made this mistake myself and have observed many others doing the same. The workflow typically goes something like this:
-1. There's an issue to fix
-2. You debug manually and try to find the root cause
-3. You exhaust all your own resources trying to solve it
-4. You finally ask the AI
+**Bad prompt:**
+> My app hangs when users log in but only sometimes and I've tried everything, can you fix it?
+
+A common mistake when starting out is selecting the wrong problem for AI to solve. I made this mistake myself and have observed many others doing the same. The workflow typically goes something like this:
+1. There's an issue to fix.
+2. You try the obvious solution (and it doesn't work).
+3. You go deeper, read more code, debug, and exhaust all your own resources.
+4. You finally ask the AI for help.

-The result is predictably poor and useless.
+You *might* get an idea from the AI but usually the results are poor and useless. So you dismiss the AI and move on.
+
+**Better prompt:**
+> I'm debugging a login issue where the app sometimes hangs.
+> Look at `Services/AuthService.cs`, `Controllers/AuthController.cs` and `Middleware/JwtMiddleware.cs` to understand the login flow.
+> Look at `Repositories/UserRepository.cs` to see how we fetch the user from the DB.
+> Here is our logic in the cache layer: `Services/TokenCacheService.cs`
+> Analyze the flow and give me suggestions where the issue might be.

-When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity" or a sparring partner. Instead, you need to guide the AI like you would instruct a junior developer, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simple and clear tasks, I found it performed remarkably well!
+When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity" or a sparring partner. Instead, *you need to guide the AI like you would instruct a junior developer*, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simple and clear tasks, I found it performed remarkably well!

Here's the thing. **If you don't know how something should be done, the AI doesn't know either.**

@@ -39,7 +49,7 @@ AI is fundamentally a guessing machine. Without clear guidance, it will confiden

The key to effective prompting is understanding how AI context works. While the model is trained on vast amounts of data from across the internet, the context provided in your current chat session carries significantly more weight. I initially assumed that since JavaScript dominates AI training data, the model would perform poorly with other languages. This assumption was incorrect. Once you grasp how context influences output, you can achieve excellent results regardless of programming language or tech stack.

-Let me illustrate this with a practical example.
+Here is another example:

**Bad prompt:**
> Create a new API endpoint for user profiles.

@@ -103,7 +113,7 @@ Here's the magic of context. Once the agent has explored your codebase and built

**You:**
> Add unit tests for the new endpoint. Look at `ProductControllerTests.cs` for reference.

-Notice that we still point the agent to the right reference file when needed.
+*Notice that we still point the agent to the right reference file when needed.*

**You:**
> Actually the CreatedAt timestamp is not needed. Remove it from the response DTO and from the UI.

@@ -111,11 +121,11 @@ Notice that we still point the agent to the right reference file when needed.

**You:**
> When the user id does not exist we hit 404 but in this case we want to redirect to the front page. Look at `ProductPage.tsx` for example.

-"What about the hard stuff like race conditions, complex state machines, and security edge cases?" These are exactly where good prompting matters most. The AI struggles when you're vague, but if you can enumerate the edge cases, describe the state transitions, or specify the security requirements, it handles them remarkably well. Of course, this assumes you actually understand the problem.
+*"What about the hard stuff like race conditions, complex state machines, and security edge cases?"*

-But wait... Earlier I said the problem was giving AI too complex problems to solve. Well, complexity itself isn't the enemy. Unclear complexity is. AI can solve any complex task but the challenge is breaking down that complexity into clear, actionable steps.
+Well... You are not there yet. Automate the code writing on the easy tasks you know how to solve first. AI performs best when you already know what to do. If you know you need to extract this logic as a service and refactor 10 files to use it, let the AI do that. You probably would have made a copy-paste error anyway, or left a misleading comment in.

-Once you get the hang of it, you start developing an intuition for when you can give the AI a broad goal versus when you need to spell out every step.
+The complex problems are exactly where good prompting matters most. The AI struggles when you're vague, but if you can enumerate the edge cases, describe the state transitions, or specify the security requirements, it handles them remarkably well. Complexity isn't an issue. Unclear complexity is. AI can solve any complex task but the challenge is breaking down that complexity into clear, actionable steps. Again: how would you delegate the task to a junior developer?

## Prompting Is a Skill
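The debugging prompt introduced in this patch works because it hands the agent the entire login flow at once. To see why that pays off, here is one classic culprit such an analysis tends to surface: a sync-over-async call in the cache layer. The file below is purely hypothetical (the post never shows `TokenCacheService.cs`); it only sketches a plausible cause of an app that hangs "only sometimes":

```csharp
// Hypothetical Services/TokenCacheService.cs - illustrative sketch, not from the post.
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class TokenCacheService
{
    private readonly IDistributedCache _cache;

    public TokenCacheService(IDistributedCache cache) => _cache = cache;

    public string GetToken(string userId)
    {
        // Sync-over-async: blocking on .Result can starve the thread pool
        // under load, so logins hang intermittently rather than every time.
        return _cache.GetStringAsync($"token:{userId}").Result;
    }

    public Task<string> GetTokenAsync(string userId)
    {
        // The fix an agent would likely suggest: stay async end to end.
        return _cache.GetStringAsync($"token:{userId}");
    }
}
```

With all four referenced files in context, the agent can flag a blocking call like this in seconds, which is exactly the kind of targeted suggestion the prompt asks for.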
From a261b8145c775e60de2f0ef4854e4d9fa3dfad64 Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Fri, 6 Feb 2026 11:06:06 +0200
Subject: [PATCH 16/22] New structure works!

---
 _posts/2026-02-03-prompt-engineering-101.md | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md
index 6119ddde..61343625 100644
--- a/_posts/2026-02-03-prompt-engineering-101.md
+++ b/_posts/2026-02-03-prompt-engineering-101.md
@@ -13,9 +13,9 @@ tags:
  - AI in Development
---

-Let's be honest. Those who have embraced AI as part of their daily development work have noticed significant improvements in both speed and quality. So the question is no longer "Is AI useful for coding?" but rather "How do I get the most out of it?"
+Let's be honest. Those who have embraced AI as part of their daily development work have noticed significant improvements in both speed and *quality* (yes, we are *not* talking about vibe coding). So the question is no longer "Is AI useful for coding?" but rather "How do I get the most out of it?"

-Why do some developers see tremendous benefits while others end up with spaghetti code? I took on a challenge at the end of last year to only work by prompting, in order to learn the ins and outs of AI-assisted development.
+Why do some developers see tremendous performance benefits while others end up with spaghetti code? I took on a challenge at the end of last year to only work by prompting, in order to learn the ins and outs of AI-assisted development.

In this post, I'll share the key lessons from that journey, and hopefully inspire you to give it (another) try.

@@ -30,7 +30,7 @@ A common mistake when starting out is selecting the wrong problem for AI to solv
3. You go deeper, read more code, debug, and exhaust all your own resources.
4. You finally ask the AI for help.

-You *might* get an idea from the AI but usually the results are poor and useless. So you dismiss the AI and move on.
+You *might* get a useful hint, but more often the results miss the mark entirely. So you dismiss the AI and go back to debugging by hand.

**Better prompt:**
> I'm debugging a login issue where the app sometimes hangs.

@@ -108,7 +108,7 @@ Now we are cooking! You can see exactly what the agent intends to do before it w

Here's the magic of context. Once the agent has explored your codebase and built a plan, you don't need to start from scratch for related tasks. The relevant files are already loaded, and the agent remembers what it just did. Your next prompts can be much simpler:

**You:**
-> Update `UserProfile.tsx` to call the new endpoint and replace the placeholders with real data.
+> Update the UI to call the new endpoint and replace the placeholders with real data.

**You:**
> Add unit tests for the new endpoint. Look at `ProductControllerTests.cs` for reference.

@@ -123,17 +123,17 @@

*"What about the hard stuff like race conditions, complex state machines, and security edge cases?"*

-Well... You are not there yet. Automate the code writing on the easy tasks you know how to solve first. AI performs best when you already know what to do. If you know you need to extract this logic as a service and refactor 10 files to use it, let the AI do that. You probably would have made a copy-paste error anyway, or left a misleading comment in.
+Well... You are not there yet. Start by automating the easy tasks you already know how to solve. AI performs best when you can clearly describe the outcome. If you know you need to extract logic into a service and refactor 10 files to use it, let the AI do that and save time. You probably would have made a copy-paste error anyway, or left a misleading comment in.

-The complex problems are exactly where good prompting matters most. The AI struggles when you're vague, but if you can enumerate the edge cases, describe the state transitions, or specify the security requirements, it handles them remarkably well. Complexity isn't an issue. Unclear complexity is. AI can solve any complex task but the challenge is breaking down that complexity into clear, actionable steps. Again: how would you delegate the task to a junior developer?
+But once you've built that foundation, the complex problems are exactly where good prompting shines. The AI struggles when you're vague, but if you can enumerate the edge cases, describe the state transitions, or specify the security requirements, it handles them remarkably well. Complexity isn't the enemy. *Unclear* complexity is. AI can solve any complex task, but the challenge is breaking it down into clear, actionable steps. Ask yourself: how would you delegate this task to a junior developer?

## Prompting Is a Skill

-This is what "coding" looks like in my workflow nowadays: going back and forth with the AI to refine the plan until it's right. I get to focus on the big picture and how the pieces fit together. In the end, I design better features, improve the codebase through refactoring, and save time because the code writing is automated.
+Once you get the hang of it, this is what "coding" looks like for me now. It's going back and forth with the AI to refine the plan until it's right. I get to focus on the big picture and how the pieces fit together. In the end, I design better features, improve the codebase through refactoring, and save time because the code writing is automated.
-But getting here wasn't instant. At first, I felt like an idiot when nothing worked. After my initial attempts, I caught myself thinking "I can code faster by hand than fixing the AI's mistakes." It took about two weeks to break even with manual coding, and another few weeks before the new approach finally clicked.

Prompting is a skill just like any other. You have to accept a small ego hit and feel dumb for a bit to make progress. The hardest part is getting started **and keep going.** You don't yet know how to talk to the agent. Your prompts will fail. You'll redo things. A lot. But with each mistake, you learn what works and what doesn't.

**The point comes eventually when you realize you've done a day's worth of work in minutes without the AI making a single mistake.** After that, there's no going back.

The world of agentic coding is evolving way too fast for anyone to stay on top of everything. New concepts emerge constantly: [MCP (Model Context Protocol)](https://modelcontextprotocol.io/introduction) lets agents connect to databases, APIs, and external tools. [Agent Skills](https://docs.github.com/en/copilot/concepts/agents/about-agent-skills) give Copilot specialized capabilities for specific tasks. Multi-agent orchestrators like [Gas Town](https://github.com/steveyegge/gastown) let you coordinate 20-30 Claude Code agents working in parallel with persistent work tracking. And [custom agents](https://code.visualstudio.com/docs/copilot/customization/custom-agents) let you create specialized assistants tailored to your workflow.

-It can feel overwhelming. If I changed my workflow every time a new tool came up, I wouldn't get any work done. And here's the thing: all of these features are ultimately just different ways to feed better instructions to the model.
+It can feel overwhelming. If I changed my workflow every time a new tool came up, I wouldn't get any work done. And here's the thing: all of these features are ultimately just different ways to feed better instructions to the model. It all comes back to context management.

My advice? Don't chase every new feature. Focus on mastering the fundamentals: understanding context, writing clear prompts, and using Planning mode. Once you've nailed those, the advanced features will make much more sense.
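The follow-up prompts quoted in this patch assume "the new endpoint" already exists, so it helps to picture it. Below is a rough sketch of a result in that spirit, assuming ASP.NET Core; none of these names come from the post, and the stub types exist only to keep the example self-contained:

```csharp
// Hypothetical Controllers/UserProfileController.cs - the kind of code the
// better prompt is steering toward. All names here are assumptions.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record User(int Id, string DisplayName, string AvatarUrl);

public interface IUserRepository
{
    Task<User?> GetByIdAsync(int id);
}

[ApiController]
[Route("api/users")]
public class UserProfileController : ControllerBase
{
    private readonly IUserRepository _users;

    public UserProfileController(IUserRepository users) => _users = users;

    [HttpGet("{id:int}/profile")]
    public async Task<IActionResult> GetProfile(int id)
    {
        var user = await _users.GetByIdAsync(id);
        if (user is null)
            return NotFound(); // mirrors the error-handling pattern the prompt points to

        // Field list deduced from the placeholders in UserProfile.tsx.
        return Ok(new { user.Id, user.DisplayName, user.AvatarUrl });
    }
}
```

Note how little is left to guesswork: route shape, status codes, and response fields are all dictated by referenced project files rather than invented by the model.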
From 19982faa7519a657de67475468a7556193f2e95b Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Fri, 6 Feb 2026 11:29:24 +0200
Subject: [PATCH 17/22] Fine tuning

---
 _posts/2026-02-03-prompt-engineering-101.md | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md
index 61343625..298149b1 100644
--- a/_posts/2026-02-03-prompt-engineering-101.md
+++ b/_posts/2026-02-03-prompt-engineering-101.md
@@ -15,7 +15,7 @@

-Why do some developers see tremendous performance benefits while others end up with spaghetti code? I took on a challenge at the end of last year to only work by prompting, in order to learn the ins and outs of AI-assisted development.
+Why do some developers see tremendous benefits while others end up with spaghetti code and hallucinations? I took on a challenge at the end of last year to only work by prompting, in order to learn the ins and outs of AI-assisted development.

In this post, I'll share the key lessons from that journey, and hopefully inspire you to give it (another) try.

@@ -34,9 +34,13 @@

**Better prompt:**
> I'm debugging a login issue where the app sometimes hangs.
+>
> Look at `Services/AuthService.cs`, `Controllers/AuthController.cs` and `Middleware/JwtMiddleware.cs` to understand the login flow.
+>
> Look at `Repositories/UserRepository.cs` to see how we fetch the user from the DB.
+>
> Here is our logic in the cache layer: `Services/TokenCacheService.cs`
+>
> Analyze the flow and give me suggestions where the issue might be.

@@ -58,7 +62,10 @@ This prompt would likely result in unexpected changes across your codebase and g

**Better prompt:**
> Create a new API endpoint for fetching user profiles.
+>
> Look at `Controllers/ProductController.cs` for reference on how we structure our endpoints and routing attributes.
+>
> Look at `ClientApp/src/views/UserProfile.tsx` and see what placeholders we are using to deduce what fields should be returned.
+>
> Follow the same patterns for error handling and response formatting.

The second prompt is more effective because it provides the AI with concrete reference points from your actual codebase. Instead of leaving the AI to guess your project's conventions, you're explicitly pointing it to existing implementations. The AI can now read those files, understand your patterns for routing, controller structure, and error handling, and produce code that fits seamlessly into your project.

@@ -150,7 +157,7 @@

-My advice? Don't chase every new feature. Focus on mastering the fundamentals: understanding context, writing clear prompts, and using Planning mode. Once you've nailed those, the advanced features will make much more sense.
+My advice is to tune out the noise. First focus on mastering the fundamentals: understanding context, writing clear prompts, and using Planning mode. Once you've nailed those, the advanced features will make much more sense.

## Getting Started
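To complete the picture, here is the companion to the controller sketch above: the kind of test file the earlier "Add unit tests for the new endpoint" follow-up might yield. It reuses the hypothetical `UserProfileController`, `IUserRepository`, and `User` types sketched after the previous patch, and assumes xUnit and Moq, since the real `ProductControllerTests.cs` is never shown:

```csharp
// Hypothetical UserProfileControllerTests.cs - illustrative sketch only,
// mirroring the conventions the prompt asks the agent to copy.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Moq;
using Xunit;

public class UserProfileControllerTests
{
    [Fact]
    public async Task GetProfile_ReturnsOk_WhenUserExists()
    {
        var repo = new Mock<IUserRepository>();
        repo.Setup(r => r.GetByIdAsync(42))
            .ReturnsAsync(new User(42, "Ada", "ada.png"));
        var controller = new UserProfileController(repo.Object);

        var result = await controller.GetProfile(42);

        Assert.IsType<OkObjectResult>(result);
    }

    [Fact]
    public async Task GetProfile_ReturnsNotFound_WhenUserIsMissing()
    {
        var repo = new Mock<IUserRepository>();
        repo.Setup(r => r.GetByIdAsync(42)).ReturnsAsync((User?)null);
        var controller = new UserProfileController(repo.Object);

        var result = await controller.GetProfile(42);

        Assert.IsType<NotFoundResult>(result);
    }
}
```

The win is not that the agent writes tests you couldn't write yourself; it is that they arrive in your project's existing style, in seconds, while the endpoint is still loaded in context.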
From f6fba4b0d4dec19077d826fd644eb59a80b2fa60 Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Fri, 6 Feb 2026 12:18:16 +0200
Subject: [PATCH 18/22] Proofreading

---
 _posts/2026-02-03-prompt-engineering-101.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md
index 298149b1..3147b29b 100644
--- a/_posts/2026-02-03-prompt-engineering-101.md
+++ b/_posts/2026-02-03-prompt-engineering-101.md
@@ -30,7 +30,7 @@ A common mistake when starting out is selecting the wrong problem for AI to solv
3. You go deeper, read more code, debug, and exhaust all your own resources.
4. You finally ask the AI for help.

-You *might* get a useful hint, but more often the results miss the mark entirely. So you dismiss the AI and go back to debugging by hand.
+You *might* get a useful hint, but more often than not, the results miss the mark entirely. So you dismiss the AI and go back to debugging by hand.

**Better prompt:**
> I'm debugging a login issue where the app sometimes hangs.

@@ -43,9 +43,9 @@ You *might* get a useful hint, but more often the results miss the mark entirely
>
> Analyze the flow and give me suggestions where the issue might be.

-When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity" or a sparring partner. Instead, *you need to guide the AI like you would instruct a junior developer*, or once you are skilled enough in prompting, a peer at the same level. Once I started giving the agent simple and clear tasks, I found it performed remarkably well!
+When I set the prompting challenge for myself, I quickly realized that using AI effectively requires a mental shift away from thinking of it as an "all-knowing entity" or a sparring partner. Instead, *you need to guide the AI like you would instruct a junior developer*. Once you are skilled enough in prompting, you can treat it more like a peer at the same level as you. Once I started giving the agent simple and clear tasks, I found it performed remarkably well!

-Here's the thing. **If you don't know how something should be done, the AI doesn't know either.**
+Here's the thing: **if you don't know how something should be done, the AI doesn't know either.**

AI is fundamentally a guessing machine. Without clear guidance, it will confidently guess and keep guessing. The quality of your output is directly tied to the clarity of your instructions.

@@ -77,7 +77,7 @@

I'm a huge advocate of Planning mode. In Agent mode, Copilot immediately starts executing, and often in the wrong direction.

-Planning mode, on the other hand, forces the AI to stop, think and *explore the codebase* first. It creates a step-by-step plan of what it *intends* to do and asks for clarifying questions. This simple change makes a massive difference. You can catch misunderstandings early, redirect the approach, or refine your requirements before any code is written. And most importantly, it automatically loads exactly the files and references it needs into context so it can complete the task without going off the rails.
+Planning mode, on the other hand, forces the AI to stop, think, and *explore the codebase* first. It creates a step-by-step plan of what it *intends* to do and asks for clarifying questions. This simple change makes a massive difference. You can catch misunderstandings early, redirect the approach, or refine your requirements before any code is written. And most importantly, it automatically loads exactly the files and references it needs into context so it can complete the task without going off the rails.
@@ -130,7 +130,7 @@

-Well... You are not there yet. Start by automating the easy tasks you already know how to solve. AI performs best when you can clearly describe the outcome. If you know you need to extract logic into a service and refactor 10 files to use it, let the AI do that and save time. You probably would have made a copy-paste error anyway, or left a misleading comment in.
+Well... you're not there yet. Start by automating the easy tasks you already know how to solve. AI performs best when you can clearly describe the outcome. If you know you need to extract logic into a service and refactor 10 files to use it, let the AI do that and save time. You probably would've made a copy-paste error anyway, or left a misleading comment in.

@@ -140,7 +140,7 @@

-Prompting is a skill just like any other. You have to accept a small ego hit and feel dumb for a bit to make progress. The hardest part is getting started **and keep going.** You don't yet know how to talk to the agent. Your prompts will fail. You'll redo things. A lot. But with each mistake, you learn what works and what doesn't.
+Prompting is a skill just like any other. You have to accept a small ego hit and feel dumb for a bit to make progress. The hardest part is getting started **and keeping going.** You don't yet know how to talk to the agent. Your prompts will fail. You'll redo things. A lot. But with each mistake, you learn what works and what doesn't.

@@ -148,7 +148,7 @@

The world of agentic coding is evolving way too fast for anyone to stay on top of everything. New concepts emerge constantly: [MCP (Model Context Protocol)](https://modelcontextprotocol.io/introduction) lets agents connect to databases, APIs, and external tools. [Agent Skills](https://docs.github.com/en/copilot/concepts/agents/about-agent-skills) give Copilot specialized capabilities for specific tasks. Multi-agent orchestrators like [Gas Town](https://github.com/steveyegge/gastown) let you coordinate 20-30 Claude Code agents working in parallel with persistent work tracking. And [custom agents](https://code.visualstudio.com/docs/copilot/customization/custom-agents) let you create specialized assistants tailored to your workflow.
-It can feel overwhelming. If I changed my workflow every time a new tool came up, I wouldn't get any work done. And here's the thing: all of these features are ultimately just different ways to feed better instructions to the model. It all comes back to context management.
+It can feel overwhelming. If I changed my workflow every time a new tool came up, I wouldn't get any work done. And it all boils down to context management: these features are just different ways to feed better instructions to the model.

My advice is to tune out the noise. First focus on mastering the fundamentals: understanding context, writing clear prompts, and using Planning mode. Once you've nailed those, the advanced features will make much more sense.

From 463c7bd85d9f17240e8d65072392a782a182da31 Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Fri, 6 Feb 2026 12:59:08 +0200
Subject: [PATCH 19/22] Removed conclusion

---
 _posts/2026-02-03-prompt-engineering-101.md | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md
index 3147b29b..11494e71 100644
--- a/_posts/2026-02-03-prompt-engineering-101.md
+++ b/_posts/2026-02-03-prompt-engineering-101.md
@@ -148,20 +148,10 @@

The world of agentic coding is evolving way too fast for anyone to stay on top of everything. New concepts emerge constantly: [MCP (Model Context Protocol)](https://modelcontextprotocol.io/introduction) lets agents connect to databases, APIs, and external tools. [Agent Skills](https://docs.github.com/en/copilot/concepts/agents/about-agent-skills) give Copilot specialized capabilities for specific tasks. Multi-agent orchestrators like [Gas Town](https://github.com/steveyegge/gastown) let you coordinate 20-30 Claude Code agents working in parallel with persistent work tracking. And [custom agents](https://code.visualstudio.com/docs/copilot/customization/custom-agents) let you create specialized assistants tailored to your workflow.

-It can feel overwhelming. If I changed my workflow every time a new tool came up, I wouldn't get any work done. And it all boils down to context management: these features are just different ways to feed better instructions to the model.
+It can feel overwhelming. If I changed my workflow every time a new tool came up, I wouldn't get any work done. And it all boils down to context management: these features are just different ways to feed better instructions to the model. That said, model choice does matter. Models have improved dramatically, and in my opinion, **Claude Opus 4.5** is currently the best for coding.

My advice is to tune out the noise. First focus on mastering the fundamentals: understanding context, writing clear prompts, and using Planning mode. Once you've nailed those, the advanced features will make much more sense.

-## Getting Started
-
-This is what you need to get going:
-
-1. Get a GitHub Copilot license from [IT Services](https://it.services.solita.fi/support/catalog/items/134)
-2. Install the [GitHub Copilot Chat](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat) extension in VS Code
-3. Log in with your Solita GitHub account
-
-That's it. You should now have the Copilot chat panel on the right side of VS Code. You can switch between Agent and Plan modes using the dropdown, and choose which model to use. In my opinion, the best coding model right now is **Claude Opus 4.5.**

## Conclusion

From 32ae0c19b4ebd3522f1f998c59dbe1cf1e4480c1 Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Fri, 6 Feb 2026 14:14:20 +0200
Subject: [PATCH 20/22] Paragraph about context rot

---
 _posts/2026-02-03-prompt-engineering-101.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md
index 11494e71..0dd7ab49 100644
--- a/_posts/2026-02-03-prompt-engineering-101.md
+++ b/_posts/2026-02-03-prompt-engineering-101.md
@@ -128,6 +128,8 @@ Here's the magic of context. Once the agent has explored your codebase and built

**You:**
> When the user id does not exist we hit 404 but in this case we want to redirect to the front page. Look at `ProductPage.tsx` for example.

+A word of warning about [context rot](https://research.trychroma.com/context-rot). AI memory behaves a lot like human memory: it retains what was discussed at the beginning and end of a conversation, but the middle gets hazy. If you keep going in the same chat session for too long, the agent becomes overwhelmed with too much information and starts getting confused. A good habit is to start a fresh chat session for each new feature, keeping the context focused on the task at hand.
+
*"What about the hard stuff like race conditions, complex state machines, and security edge cases?"*

From 2c53fa818a950213704549b5024508f12bd568be Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Fri, 6 Feb 2026 14:54:28 +0200
Subject: [PATCH 21/22] Fixed factual mistake on agent mode

---
 _posts/2026-02-03-prompt-engineering-101.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md
index 0dd7ab49..8b80ce14 100644
--- a/_posts/2026-02-03-prompt-engineering-101.md
+++ b/_posts/2026-02-03-prompt-engineering-101.md
@@ -75,7 +75,7 @@

## Just Use Planning Mode

-I'm a huge advocate of Planning mode. In Agent mode, Copilot immediately starts executing, and often in the wrong direction.
+I'm a huge advocate of Copilot's Planning mode. In Agent mode, Copilot eagerly starts executing, often in the wrong direction.

Planning mode, on the other hand, forces the AI to stop, think, and *explore the codebase* first. It creates a step-by-step plan of what it *intends* to do and asks for clarifying questions. This simple change makes a massive difference. You can catch misunderstandings early, redirect the approach, or refine your requirements before any code is written. And most importantly, it automatically loads exactly the files and references it needs into context so it can complete the task without going off the rails.
From 8278e3883b118cab56d5d9f3b66632b2d17276f9 Mon Sep 17 00:00:00 2001
From: Markus Kumpulainen
Date: Fri, 6 Feb 2026 15:18:04 +0200
Subject: [PATCH 22/22] Expanded the main idea

---
 _posts/2026-02-03-prompt-engineering-101.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2026-02-03-prompt-engineering-101.md b/_posts/2026-02-03-prompt-engineering-101.md
index 8b80ce14..f201d439 100644
--- a/_posts/2026-02-03-prompt-engineering-101.md
+++ b/_posts/2026-02-03-prompt-engineering-101.md
@@ -45,7 +45,7 @@ When I set the prompting challenge for myself, I quickly realized that using AI

-Here's the thing: **if you don't know how something should be done, the AI doesn't know either.**
+Here's the thing: **if you don't know how something should be done, the AI doesn't know either.** Or rather, if you don't know what you want from the AI, how can you expect it to deliver?

AI is fundamentally a guessing machine. Without clear guidance, it will confidently guess and keep guessing. The quality of your output is directly tied to the clarity of your instructions.