# Lesson 02 – Prompt Engineering
Prompt engineering is the craft of writing inputs that reliably produce high‑quality outputs from a language model. Even small changes in how you phrase a request can significantly improve results.
This lesson introduces the essential components of an effective prompt, walks through common prompting strategies, and shows how to iterate and troubleshoot.
## 1. The Anatomy of a Prompt
Most effective prompts consist of three core elements:
- **Persona** – Specifies who the model should act as. This shapes tone, expertise, and perspective. Example: “You are a friendly technical support agent.”
- **Context** – Provides relevant background, goals, or constraints. Example: “The user is running Python 3.10 on Windows.”
- **Examples** – Show the model what kind of input/output pairs you expect. These are especially useful for formatting and tone. Example: “Input: 2 + 2 → Output: 4”
Each element reduces ambiguity and guides the model toward your intent; the sketch after the diagram shows one way to combine them in code.
```mermaid
flowchart TD
    A(Persona) --> B(Context)
    B --> C(Examples)
    C --> D(Resulting Answer)
```
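To make the structure concrete, here is a minimal Python sketch that assembles the three components into a single prompt string. The `build_prompt` helper and the sample values are purely illustrative; they are not part of any particular SDK.

```python
def build_prompt(persona: str, context: str, examples: list[tuple[str, str]], task: str) -> str:
    """Combine persona, context, and examples into one prompt string."""
    example_lines = "\n".join(f"Input: {inp} -> Output: {out}" for inp, out in examples)
    return (
        f"{persona}\n\n"                    # who the model should act as
        f"Context: {context}\n\n"           # background and constraints
        f"Examples:\n{example_lines}\n\n"   # demonstrations of the expected format
        f"Task: {task}"                     # the actual request
    )

prompt = build_prompt(
    persona="You are a friendly technical support agent.",
    context="The user is running Python 3.10 on Windows.",
    examples=[("2 + 2", "4")],
    task="Explain how to create a virtual environment.",
)
print(prompt)
```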
### Exercise: Deconstruct the Prompt
Label each part of this prompt as persona, context, or example:
"You are an experienced resume writer. The user is applying for a junior data analyst role. Rewrite the following bullet to be more action‑oriented: 'Responsible for creating weekly reports.'"
Then, rewrite the prompt so each component is explicitly separated.
## 2. Prompting Techniques
Different prompting styles offer different advantages depending on the task.
### 2.1 Zero‑Shot Prompting

You give the model just the task—no examples or extra context. It's quick but less reliable for nuanced outputs.

```text
Translate to French: "Where is the train station?"
```
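If you want to try this from code right away, here is a minimal sketch using the `openai` Python package (v1+), which Lesson 03 covers in detail. The model name is a placeholder; substitute whichever chat model you have access to.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Zero-shot: the task is the entire prompt, with no examples or extra context.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": 'Translate to French: "Where is the train station?"'}],
)
print(response.choices[0].message.content)
```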
### 2.2 Few‑Shot Prompting

You provide one or more examples to guide the model’s style, structure, or reasoning.

```text
Q: What is 2 + 2?
A: 4
Q: What is 3 + 5?
A: 8
Q: What is 10 + 4?
A:
```
This primes the model to respond in a consistent format.
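A common pattern is to build the few-shot block programmatically from a list of worked examples. The sketch below is illustrative and not tied to any particular SDK; it simply renders the examples and leaves the final answer blank for the model to complete.

```python
# Worked examples that demonstrate the expected question/answer format.
examples = [
    ("What is 2 + 2?", "4"),
    ("What is 3 + 5?", "8"),
]

def few_shot_prompt(shots: list[tuple[str, str]], question: str) -> str:
    """Render the examples as Q/A pairs, then leave the final answer for the model."""
    rendered = "\n".join(f"Q: {q}\nA: {a}" for q, a in shots)
    return f"{rendered}\nQ: {question}\nA:"

print(few_shot_prompt(examples, "What is 10 + 4?"))
```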
### 2.3 Chain‑of‑Thought Prompting

You explicitly ask the model to reason step by step before giving a final answer. This often improves accuracy, especially for logic or math problems.

```text
How many letters are in the word "developer"?
Explain your reasoning, then provide just the number.
```
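One lightweight way to apply this in code is to wrap any question in a reasoning instruction and then read only the last line of the reply as the final answer. The wrapper below is an illustrative convention, not a standard API, and assumes you ask the model to put the answer on its own final line.

```python
def chain_of_thought(question: str) -> str:
    """Wrap a question in a step-by-step reasoning instruction."""
    return (
        f"{question}\n"
        "Think through the problem step by step, showing your reasoning.\n"
        "Then write only the final answer on the last line."
    )

def final_answer(model_output: str) -> str:
    """Take the last non-empty line of the response as the final answer."""
    lines = [line.strip() for line in model_output.splitlines() if line.strip()]
    return lines[-1] if lines else ""

print(chain_of_thought('How many letters are in the word "developer"?'))
```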
## 3. How to Iterate on a Prompt
Prompts usually need tuning. Don’t expect perfection on the first try.
A typical workflow:
1. Run the prompt. Look closely at what works—and what doesn’t.
2. Revise the prompt. Add context, clarify language, or tweak examples.
3. Compare results. Check whether the new version performs better.
Even small adjustments, such as reordering instructions or being more specific, can lead to big improvements. The sketch after the diagram shows a simple way to compare two prompt versions side by side.
```mermaid
sequenceDiagram
    participant User
    participant LLM
    User->>LLM: Initial prompt
    LLM-->>User: Response with issues
    User->>LLM: Refined prompt
    LLM-->>User: Improved output
```
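As a minimal comparison harness, the snippet below sends two versions of a prompt to the same model and prints the outputs for manual, side-by-side review. It again assumes the `openai` package and uses a placeholder model name.

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

versions = {
    "v1": "Explain AI to me.",
    "v2": "Explain what AI is in three short sentences, in a friendly tone, for a high-school student.",
}

# Run each version against the same task and compare the outputs by eye.
for name, prompt in versions.items():
    print(f"--- {name}: {prompt}")
    print(ask(prompt))
    print()
```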
## 4. Common Prompting Pitfalls
As you experiment, watch for these common failure modes:
| Failure Mode | Description | Fix |
|---|---|---|
| Hallucinated Facts | The model invents details. | Ask it to cite sources or say “I don’t know.” |
| Ambiguous Prompts | Vague wording causes unpredictable answers. | Be clear about format, tone, and scope. |
| Ignored Constraints | The model skips word counts or formatting rules. | Reinforce constraints at the end of the prompt and in examples. |
| Repetitive Output | The model repeats phrases or ideas. | Raise the temperature slightly or add a frequency penalty. |
Understanding these patterns helps you debug prompts more effectively.
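The decoding-related fixes in the last row live in the API call rather than in the prompt text. Here is a hedged sketch with the `openai` package; the parameter values are illustrative starting points, not recommendations.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "List five tips for writing clear prompts."}],
    temperature=0.9,        # a bit more randomness discourages repeated phrasing
    frequency_penalty=0.5,  # penalizes tokens the model has already used often
)
print(response.choices[0].message.content)
```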
## 5. Practice Exercises
Put your skills to work with these hands-on tasks.
### ✍️ Exercise 1: Iterative Rewrite
Start with a simple prompt:
"Explain AI to me."
Then, refine it across three iterations to make the output:
- Concise
- Friendly
- Appropriate for a high-school student
Record each prompt version and note how the output improves.
### 🛠️ Exercise 2: Clarify the Request
Turn this vague instruction:
"Tell the computer to clean up files."
Into a specific, actionable command (e.g., a shell command or script) with appropriate safety checks.
✅ See sample solutions in solutions/02_prompt_exercises_solutions.md
## 6. Prompt Examples Reference
Want reusable templates? Browse the Prompt Examples Reference for downloadable templates and the sample prompts used in this lesson.
## Recap
By understanding how prompts are structured—and how to refine them—you’ll gain much greater control over model responses. Whether you’re generating code, drafting content, or answering questions, thoughtful prompt design is a powerful multiplier.
Building on Lesson 01 – Introduction to Large Language Models, which showed that LLMs are powerful yet fallible predictive engines, the next lesson, Lesson 03 – Using the OpenAI API, puts these prompting techniques into practice in code.