AI Engineering Curriculum
Phase 0: Foundations

Module 0.2

How to Prompt Well

System Prompts

The system prompt is your instruction set to the model - it runs before any user message and shapes everything: the model's role, its rules, its personality, its constraints.

Python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

client.messages.create(
    model="claude-opus-4-6-20250929",
    max_tokens=1024,  # required by the Messages API
    system="You are a financial analyst. Always cite sources. Never give investment advice.",
    messages=[{"role": "user", "content": "Tell me about NVIDIA stock"}],
)

Think of it like a job description handed to an employee before their first day. The better written it is, the more reliably they perform.

Why it matters for agents specifically:

  • Your agent's entire behavior is defined here - its tools, its rules, its boundaries
  • CLAUDE.md files work because they get injected into the system prompt automatically
  • A poorly written system prompt is the #1 cause of unreliable agent behavior

Prompt Engineering Principles

1. Be specific
Vague instructions produce vague output. Define format, length, tone, and constraints explicitly.

Vague                Specific
"Summarize this"     "Summarize this in 3 bullet points, each under 15 words"
"Write some code"    "Write a Python function with type hints and a docstring"
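As a sketch, the "specific" version can be generated programmatically. The helper below is hypothetical (not part of any SDK); it just bakes format, length, and tone constraints into every summarization request so they can't drift between calls.

```python
def build_summary_prompt(text: str, bullets: int = 3, max_words: int = 15) -> str:
    """Build a summarization prompt with explicit format, length, and tone constraints."""
    return (
        f"Summarize the following text in exactly {bullets} bullet points, "
        f"each under {max_words} words. Use a neutral tone.\n\n"
        f"Text:\n{text}"
    )

prompt = build_summary_prompt("NVIDIA reported record data-center revenue this quarter.")
```

Centralizing constraints in one place keeps them consistent across calls instead of being re-typed (and re-forgotten) in each prompt.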

2. Give context
The model only knows what's in its context window. If it needs background to do its job well, you have to include it. It can't look anything up on its own (without a tool).
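A minimal sketch of that idea, with invented company data standing in for real background material:

```python
# The model can't look anything up, so the background it needs is pasted
# directly into the prompt. All names and figures below are made up.
background = (
    "Company: Acme Corp\n"
    "Q3 revenue: $12M, up 8% quarter over quarter\n"
    "Main product: industrial sensors"
)

question = "Is revenue growing?"

prompt = (
    "Answer using only the context below. If the context is insufficient, "
    "say so.\n\n"
    f"Context:\n{background}\n\n"
    f"Question: {question}"
)
```

The "use only the context below" instruction also doubles as a hallucination guard: it gives the model permission to say the answer isn't there.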

3. Use examples (few-shot prompting)
Showing the model what good output looks like often works better than describing it.

Bad:  "Classify this review as positive or negative."
Good: "Classify this review as positive or negative.
       Example: 'Great product!' → positive
       Example: 'Broke after a week' → negative
       Now classify: 'Delivery was slow but quality is excellent'"
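The few-shot prompt above can be assembled from a list of labeled examples. `few_shot_prompt` is a hypothetical helper, not a library function:

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot classification prompt from labeled examples."""
    lines = ["Classify this review as positive or negative."]
    for text, label in examples:
        lines.append(f"Example: '{text}' -> {label}")
    lines.append(f"Now classify: '{query}'")
    return "\n".join(lines)

prompt = few_shot_prompt(
    [("Great product!", "positive"), ("Broke after a week", "negative")],
    "Delivery was slow but quality is excellent",
)
```

Keeping examples in a list also makes it easy to add or swap them as you discover cases the model gets wrong.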

4. Chain of thought
Telling the model to think before answering improves accuracy on complex tasks.

"Think through this step by step before giving your final answer."

This works because it forces the model to "show its work" in tokens before committing to an answer - and each token it generates becomes context for the next one.
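One way to use this in practice is to ask for a marker line and strip the reasoning afterward. The "Final answer:" convention below is an assumption of this sketch, not a model requirement:

```python
COT_SUFFIX = (
    "\n\nThink through this step by step before giving your final answer. "
    "End your reply with a line starting with 'Final answer:'."
)

def extract_final_answer(reply: str) -> str:
    """Return the text after the 'Final answer:' marker, or the whole reply."""
    for line in reply.splitlines():
        if line.startswith("Final answer:"):
            return line.removeprefix("Final answer:").strip()
    return reply.strip()

# With a simulated model reply:
answer = extract_final_answer("Step 1: ...\nStep 2: ...\nFinal answer: 42")  # "42"
```

This way the model still gets to "show its work" in tokens, but downstream code only sees the answer it committed to.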

5. Separate instruction from data
Make it unambiguous what's your instruction vs. what's the content being processed.

"Summarize the following article. Do not include your opinion.

Article:
---
[article text here]
---"
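A sketch of the same pattern as a helper: the delimiters make it hard for text inside the article to be read as an instruction.

```python
def wrap_article(instruction: str, article: str) -> str:
    """Fence untrusted content with delimiters so it reads as data, not instructions."""
    return f"{instruction}\n\nArticle:\n---\n{article}\n---"

prompt = wrap_article(
    "Summarize the following article. Do not include your opinion.",
    "[article text here]",
)
```

This separation matters most when the article comes from users or the web, where it may itself contain instruction-like text.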

Why Prompts Fail

Failure                     Root cause
Model ignores a rule        Rule was buried in a long prompt, or contradicted elsewhere
Inconsistent output format  Format wasn't specified precisely enough
Model "hallucinates"        Asked for info it doesn't have without being told to say "I don't know"
Agent goes off-task         System prompt didn't constrain scope clearly

The pattern: Almost every prompt failure traces back to ambiguity. The model filled in the gap with its best guess.

