# Why prompting matters
Two people give ChatGPT the same task: "write an email to my boss asking for leave." One gets a generic, unusable paragraph. The other gets a polished, ready-to-send email. The only difference? The way they asked.
Prompt engineering is the skill of asking AI in a way that reliably gets you what you want. It's not a trick — it's a discipline. Most people who complain "AI is useless" are bad at prompting. This module fixes that.
## The CO-STAR framework (memorise this)
CO-STAR is the clearest prompting framework I know. Six letters, six ingredients. Use all six and your output quality jumps noticeably, often overnight.
| Letter | Stands for | What to include |
|---|---|---|
| C | Context | Background. Who are you? What's the situation? |
| O | Objective | The specific task. What do you need done? |
| S | Style | Professional? Casual? Punchy? Academic? |
| T | Tone | Warm? Formal? Stern? Funny? |
| A | Audience | Who will read this? Parent? Boss? Colleague? Student? |
| R | Response format | Email? Bullet list? Table? 500 words? 3 paragraphs? |
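If you script your prompts rather than typing them, the six ingredients are easy to assemble programmatically. This is an illustrative sketch, not part of CO-STAR itself; the function and field names are our own:

```python
def costar_prompt(context, objective, style, tone, audience, response_format):
    """Assemble the six CO-STAR ingredients into one labelled prompt string.

    Empty ingredients are skipped, but for best results supply all six.
    """
    parts = [
        ("Context", context),
        ("Objective", objective),
        ("Style", style),
        ("Tone", tone),
        ("Audience", audience),
        ("Response format", response_format),
    ]
    return "\n".join(f"{label}: {value}" for label, value in parts if value)

prompt = costar_prompt(
    context="I'm a Grade 7 Mathematics teacher in a Kenyan public school.",
    objective="Design a 35-minute lesson introducing adding fractions.",
    style="Practical, learner-centred, activity-based.",
    tone="Professional.",
    audience="Grade 7 learners who understand same-denominator addition.",
    response_format="KICD-style lesson plan.",
)
```

Paste the resulting string into any chat model; the labelled lines are exactly what the worked example below spells out by hand.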
### Worked example: a teacher asks AI to help plan a lesson

First, the vague version most people type: "Help me plan a maths lesson."

This gets a generic, unusable response because the AI has no context. It doesn't know the grade, the term, the sub-strand, how long the lesson is, or what resources you have.

Now the CO-STAR version:
- **Context:** I'm a Grade 7 Mathematics teacher in a Kenyan public school following the CBC (now CBE) curriculum. My class has 45 learners with mixed ability. Term 1 is ending and we're on the "Fractions" sub-strand.
- **Objective:** Design a single 35-minute lesson that introduces "adding fractions with different denominators."
- **Style:** Practical, learner-centred, activity-based.
- **Tone:** Professional — I'll read this to my HOD.
- **Audience:** The lesson plan is for me; the lesson itself is for Grade 7 learners who understand same-denominator addition.
- **Response format:** KICD-style lesson plan with sections for: Specific Learning Outcomes, Key Inquiry Questions, Learning Resources, Introduction (5 min), Lesson Development (25 min), Conclusion (5 min), Assessment, and Reflection.
That second prompt will produce a genuinely usable lesson plan. Same AI, totally different output quality.
## The five rookie mistakes (stop making these)
- **Being vague.** "Make it better" is useless. "Rewrite this in active voice and cut it by 30%" is actionable.
- **Not specifying output format.** If you want a bulleted list, say so. AI defaults to prose.
- **Not giving examples.** If you have a previous email whose tone you liked, paste it and say "Match the tone of the email below."
- **Asking for everything in one shot.** Break complex tasks into steps: ask for an outline first, then a draft, then a polish.
- **Accepting the first answer.** AI's first reply is usually a 6/10. "Good, but can you make it more [X]?" usually gets it to an 8/10. Iterate.
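The last two mistakes both come down to treating the chat as a single request instead of a conversation. For readers who automate this, here is a minimal sketch of the step-then-iterate flow; `ask` is a hypothetical stand-in for whatever chat interface or API you actually use:

```python
def ask(prompt: str) -> str:
    """Hypothetical stand-in for a real chat call (ChatGPT, Claude, Gemini)."""
    return f"<model reply to: {prompt[:40]}...>"

# Step 1 and 2: break the complex task into stages instead of one giant prompt.
outline = ask("Outline a 1,000-word article on prompt engineering for teachers.")
draft = ask(f"Write the article following this outline:\n{outline}")

# Step 3: don't accept the first answer; iterate with a targeted follow-up.
polished = ask(f"Good, but rewrite this in active voice and cut it by 30%:\n{draft}")
```

The same pattern works by hand in a chat window: three short, focused turns beat one sprawling prompt.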
## Your companion library — 20 ready-to-use CO-STAR templates

Before you try writing your own, study the 20 battle-tested prompts we built for this module, organised by role (teachers, students, parents, professionals, small business). Every template is copy-pasteable and uses the CO-STAR framework.
## Try this now (5 minutes, optional)
Open ChatGPT, Claude, or Gemini. Give it this exact CO-STAR prompt and see what you get:
- **Context:** I'm a parent and my child needs help with comprehension at school.
- **Objective:** Give me 5 practical activities I can do with my child at home in the next 2 weeks to improve comprehension.
- **Style:** Warm, practical, no jargon.
- **Tone:** Encouraging.
- **Audience:** A working parent with limited time (30 min/day).
- **Response format:** Numbered list of 5 activities. Each activity: name, materials needed, step-by-step instructions, time required.
The output should be immediately usable. If it isn't, the fault is the model's, not your prompt: run the same prompt in another model, or ask one targeted follow-up.