Prompt Structure and Formatting
You’ve learned that clarity matters. Now you’re going to learn the structure that makes clarity possible. How you organize information in a prompt—the order, the formatting, the flow—directly affects how the model processes your request. Think of it as the difference between handing someone a messy pile of ingredients and a recipe card. Both contain the same information, but one is dramatically more useful.
Common Prompt Patterns: Three Proven Structures
There’s no single “correct” way to structure a prompt, but certain patterns have proven effective. Understanding these patterns gives you a toolkit for different situations.
Pattern 1: Instruction-First
Put your main task at the top. This pattern works best when the task is clear and the supporting information is secondary.
TASK: Classify the following customer feedback as positive, neutral, or negative.
STYLE: Be concise. Provide the classification and a 1-sentence explanation.
FEEDBACK:
"""
I loved the product itself, but shipping took 3 weeks. That was frustrating.
"""
ANSWER:
When to use:
- The task is self-contained
- Supporting details are relatively simple
- You want the model to dive in immediately
Advantage: The model immediately knows what it’s doing.
Disadvantage: If the context is complex, the model reads it only after committing to the task framing, so important details can get less attention.
Pattern 2: Context-First
Build context and background before giving the task. This pattern prepares the model’s knowledge distribution before it processes your actual request.
CONTEXT:
You are a technical writer specializing in cloud infrastructure. Your audience
consists of non-technical business stakeholders who need to understand cloud
architecture for budget planning and vendor selection purposes.
KEY CONSTRAINTS:
- Avoid jargon; if necessary, explain it immediately
- Emphasize business value and risk, not technical details
- Assume no prior knowledge of cloud services
- Keep it to 3-4 paragraphs
YOUR TASK:
Explain what a CDN (Content Delivery Network) is and why a company might
choose to invest in one. Include one concrete business benefit.
When to use:
- Context significantly affects the output
- You want the model primed with specific expertise or perspective
- The audience or constraints are complex
Advantage: The model’s knowledge distribution is shaped before it processes the task.
Disadvantage: Takes more tokens; requires patience for setup.
Pattern 3: Example-First (Few-Shot)
Show examples of what you want before asking for it. This pattern uses “teaching by example” rather than explicit instruction.
You will classify customer reviews. Here are examples:
EXAMPLE 1:
Review: "Amazing product! Arrived quickly and exactly as described."
Classification: Positive
EXAMPLE 2:
Review: "It works okay, but I expected better for the price."
Classification: Neutral
EXAMPLE 3:
Review: "Complete waste of money. Broke after two days."
Classification: Negative
NOW CLASSIFY THIS:
Review: "The quality is there, though customer service was slow to respond."
Classification:
When to use:
- The pattern is easier to show than to explain
- You want consistency with a specific style or format
- The task has subtle distinctions
Advantage: The model learns by pattern-matching to examples.
Disadvantage: Takes tokens for examples; only works if examples are representative.
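Few-shot prompts like the one above are easy to assemble programmatically. The sketch below is a minimal illustration; the `build_few_shot_prompt` helper and the example data are ours, not from any library:

```python
def build_few_shot_prompt(task_intro, examples, new_input):
    """Assemble a few-shot classification prompt from labeled examples."""
    parts = [task_intro, ""]
    for i, (review, label) in enumerate(examples, start=1):
        parts.append(f"EXAMPLE {i}:")
        parts.append(f'Review: "{review}"')
        parts.append(f"Classification: {label}")
        parts.append("")
    parts.append("NOW CLASSIFY THIS:")
    parts.append(f'Review: "{new_input}"')
    parts.append("Classification:")  # trailing cue for the model to complete
    return "\n".join(parts)

examples = [
    ("Amazing product! Arrived quickly and exactly as described.", "Positive"),
    ("It works okay, but I expected better for the price.", "Neutral"),
    ("Complete waste of money. Broke after two days.", "Negative"),
]

prompt = build_few_shot_prompt(
    "You will classify customer reviews. Here are examples:",
    examples,
    "The quality is there, though customer service was slow to respond.",
)
print(prompt)
```

Keeping examples in a plain list makes it trivial to swap them out when they stop being representative.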
Choosing Between Patterns
Here’s a decision tree:
Is the task extremely clear and simple?
  YES → Use Instruction-First
  NO  → Does context significantly change the answer?
    YES → Use Context-First
    NO  → Would examples make the pattern obvious?
      YES → Use Example-First
      NO  → Use Context-First (safest default)
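The decision tree can be encoded as a tiny helper, which is handy if you select patterns programmatically. The function name and boolean flags are illustrative:

```python
def choose_pattern(task_is_simple, context_changes_answer, examples_show_pattern):
    """Encode the pattern-selection decision tree as three yes/no questions."""
    if task_is_simple:
        return "Instruction-First"
    if context_changes_answer:
        return "Context-First"
    if examples_show_pattern:
        return "Example-First"
    return "Context-First"  # safest default

print(choose_pattern(True, False, False))   # Instruction-First
print(choose_pattern(False, False, True))   # Example-First
```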
Using XML/Markdown Structure for Complex Prompts
As prompts get more complex, raw text becomes hard to parse. Structured markup helps both you and the model.
Why Structure Matters for Complex Prompts
Complex prompts often have:
- Multiple types of information (context, rules, examples, input)
- Conditional logic (“if X, then do Y”)
- Multiple steps or subtasks
- Content you want quoted or treated specially
Raw text gets confusing:
I'm a financial analyst. Analyze this company's financial health. Look at
revenue, profit margin, debt. The data is: revenue is up 15% year-over-year,
profit margin is 8%, debt to equity ratio is 0.5. Also consider industry
trends but don't speculate. Format as three bullet points.
Structured format is clearer:
<role>Financial analyst</role>
<task>
Analyze this company's financial health
</task>
<metrics_to_analyze>
- Revenue growth
- Profit margin
- Debt-to-equity ratio
- Industry context
</metrics_to_analyze>
<data>
- Revenue: up 15% year-over-year
- Profit margin: 8%
- Debt-to-equity ratio: 0.5
</data>
<constraints>
- Do NOT speculate about future performance
- Format as 3 bullet points
- Use industry context only, don't extrapolate
</constraints>
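When you build prompts like this programmatically, a small helper can turn a dict of sections into the tagged format. A minimal sketch; `to_xml_prompt` is our own name, and this produces simple XML-style tags rather than validated XML:

```python
def to_xml_prompt(sections):
    """Wrap each section of a prompt in a simple XML-style tag, one per key."""
    parts = []
    for tag, body in sections.items():
        if isinstance(body, list):
            # Render list sections as bullet points
            body = "\n".join(f"- {item}" for item in body)
        parts.append(f"<{tag}>\n{body}\n</{tag}>")
    return "\n".join(parts)

prompt = to_xml_prompt({
    "role": "Financial analyst",
    "task": "Analyze this company's financial health",
    "data": [
        "Revenue: up 15% year-over-year",
        "Profit margin: 8%",
        "Debt-to-equity ratio: 0.5",
    ],
    "constraints": [
        "Do NOT speculate about future performance",
        "Format as 3 bullet points",
    ],
})
print(prompt)
```

Because dicts preserve insertion order, the sections appear in the order you declare them.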
XML vs. Markdown: Which to Use?
Use XML when:
- The prompt is complex with many distinct sections
- You want machine-readable structure
- You’re building prompts programmatically
- You want nested information
Example:
<instruction>
Translate the following text to Spanish.
</instruction>
<options>
<style>formal</style>
<audience>business professionals</audience>
<preserve_terminology>product names</preserve_terminology>
</options>
<text>
Please contact our sales team for pricing information.
</text>
Use Markdown when:
- The prompt is moderately complex
- Readability for humans is important
- You’re working in a chat interface
- You want clear visual hierarchy
Example:
## Task
Translate the following text to Spanish.
## Style & Audience
- Style: Formal
- Audience: Business professionals
- Preserve: Product names (don't translate them)
## Text to Translate
Please contact our sales team for pricing information.
Example: Complex Prompt with Structure
Here’s a real-world complex prompt using XML:
<context>
You are a code reviewer for a Python backend team. The team values:
- Clean, readable code
- Comprehensive error handling
- Proper type hints
- Thoughtful comments only (no obvious ones)
</context>
<task>Review this pull request for code quality issues.</task>
<review_focus>
<priority level="high">
- Security vulnerabilities
- Unhandled exceptions
- Performance problems
</priority>
<priority level="medium">
- Type hint completeness
- Code clarity
- Test coverage suggestions
</priority>
<priority level="low">
- Formatting and style (assume automated formatters handle this)
</priority>
</review_focus>
<format>
<structure>
- Summary (1 sentence): Recommend merge or request changes?
- Issues found (numbered list with severity)
- Suggested improvements (3-5 key changes)
- Code snippets for examples
</structure>
<tone>Constructive and encouraging. Assume the author wants to improve.</tone>
</format>
<code_to_review>
```python
def process_user_data(user_id, data):
    user = get_user(user_id)
    if user:
        user.data = data
        db.save(user)
        return True
    return False
```
</code_to_review>
System Prompts vs. User Prompts vs. Assistant Prefilling
Modern LLM APIs distinguish between different types of prompts:
System Prompt
The system prompt sets up the model’s behavior for an entire conversation. It’s usually set once and stays consistent across multiple user messages.
{
  "system": "You are a helpful customer service representative for a software company. You prioritize solving the customer's problem quickly while maintaining a friendly, professional tone. If you don't know something, you say so and offer to find the answer.",
  "messages": [
    {"role": "user", "content": "My app keeps crashing on startup"}
  ]
}
System prompts are ideal for:
- Defining the assistant’s personality or role
- Setting global constraints (tone, content guidelines)
- Establishing the assistant’s expertise
- Rules that apply to all interactions
User Prompt
The user prompt is the actual message or question from the user. It can be a single message or part of a conversation history.
{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What's its population?"}
  ]
}
User prompts are for:
- The actual task or question
- Current context in a conversation
- Specific inputs to process
Assistant Prefilling
Some APIs let you start an assistant response with specific text, which the model then completes. This is useful for steering output format.
{
  "messages": [
    {"role": "user", "content": "List 3 benefits of exercise."},
    {"role": "assistant", "content": "Here are the 3 key benefits:\n\n1. "}
  ]
}
The model continues from “1. ” with its answer, and the structure is preserved.
Prefilling is useful for:
- Enforcing specific output formats
- Guaranteeing the response starts a certain way
- Steering the model toward structured responses
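In code, prefilling means ending the message list with a partial assistant turn. The sketch below uses the Anthropic Messages API; the helper names (`build_prefilled_messages`, `ask_with_prefill`) are ours. Note that the API returns only the continuation, so you prepend the prefill yourself:

```python
PREFILL = "Here are the 3 key benefits:\n\n1. "

def build_prefilled_messages(question, prefill):
    """End the conversation with a partial assistant turn; the model
    continues from exactly that text."""
    return [
        {"role": "user", "content": question},
        {"role": "assistant", "content": prefill},
    ]

def ask_with_prefill(question, prefill=PREFILL):
    import anthropic  # third-party SDK; assumes ANTHROPIC_API_KEY is set
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=300,
        messages=build_prefilled_messages(question, prefill),
    )
    # The response contains only the continuation, so reconstruct
    # the full answer by prepending the prefill.
    return prefill + response.content[0].text
```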
Example: Using All Three Together
import anthropic

client = anthropic.Anthropic()

# System prompt: sets overall behavior
system_prompt = """You are an expert Python tutor. When a student asks a question:
1. Explain the concept clearly with examples
2. Point out common mistakes
3. Suggest practice exercises

Keep explanations concise but thorough."""

# User prompt: the actual question
user_message = "I'm confused about Python decorators. Can you explain them?"

# API call with system prompt
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system=system_prompt,
    messages=[
        {"role": "user", "content": user_message}
    ],
)

print(response.content[0].text)
Working with Output Formats: JSON, Markdown, Tables, Lists
The output format you request affects both structure and usability.
JSON Output
Use JSON when you need structured, machine-readable output.
Parse the following customer feedback and return as JSON:
{
  "sentiment": "positive|neutral|negative",
  "topics": ["string"],
  "urgency": "low|medium|high",
  "suggested_action": "string"
}
Feedback: "Love the product, but wish it had offline mode. Otherwise perfect!"
Expected output:
{
  "sentiment": "positive",
  "topics": ["product_satisfaction", "feature_request"],
  "urgency": "low",
  "suggested_action": "Consider offline functionality in roadmap"
}
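Models sometimes wrap JSON in prose or Markdown code fences, so parse the response defensively. A minimal sketch, assuming typical response shapes; the `extract_json` helper is ours:

```python
import json
import re

def extract_json(model_output):
    """Pull the first JSON object out of a model response, tolerating
    surrounding prose or Markdown code fences."""
    # Prefer a ```json ... ``` fenced block if one exists
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", model_output, re.DOTALL)
    candidate = fenced.group(1) if fenced else model_output
    # Otherwise fall back to the outermost braces
    start, end = candidate.find("{"), candidate.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(candidate[start:end + 1])

raw = 'Sure! Here is the analysis:\n```json\n{"sentiment": "positive", "urgency": "low"}\n```'
print(extract_json(raw))  # {'sentiment': 'positive', 'urgency': 'low'}
```

If the JSON must match a fixed schema, validate the parsed dict before using it rather than trusting the model's formatting.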
Markdown Output
Use markdown for readable, formatted text that might be displayed as-is.
Write a product feature announcement in Markdown format.
Include:
- An H2 heading
- A 2-sentence overview
- Bullet points of key features
- A code example (if applicable)
- A call-to-action
Feature: New API rate limiting controls
Table Output
Use tables when comparing multiple items across dimensions.
Create a comparison table of these programming languages:
- Python
- Go
- Rust
Columns: Language | Best For | Learning Curve | Performance
Format as Markdown table
Numbered Lists
Use numbered lists for sequential steps or prioritized items.
List the 5 most important considerations when choosing a database,
ordered by importance for a startup MVP.
Format: Numbered list, 1-2 sentences per item.
Template Variables and Reusable Prompt Patterns
For prompts you use repeatedly, templates with variables let you reuse the structure and swap in new details.
Simple Template Example
Instead of rewriting similar prompts, create a template:
Prompt Template: Code Review
---
role = [code_reviewer_type]
language = [programming_language]
focus = [review_focus_areas]
constraint = [max_length]
I am a [role] reviewing [language] code.
Please focus on: [focus]
Keep your review to [constraint].
Code to review:
[code_snippet]
---
Concrete example:
role = "senior backend engineer"
language = "Python"
focus = "security and performance"
constraint = "300 words"
code_snippet = "[user's code here]"
# Result: Customized review prompt
Prompt Library Pattern
Build a library of reusable prompts for your workflow:
PROMPT_TEMPLATES = {
    "code_review": """
You are a {language} code reviewer with expertise in {expertise}.
Review this code for {focus_areas}.
{constraints}
""",
    "writing_improvement": """
Improve this {writing_type} for {audience}.
Current issues to address: {issues}
Keep tone: {tone}
Max {word_count} words.
""",
    "data_analysis": """
Analyze {data_type} data.
Key questions: {questions}
Format output as {output_format}.
Assumptions to avoid: {assumptions}
""",
}
When you need a code review, you just fill in the variables instead of writing a new prompt from scratch.
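Filling a template is then a single `str.format` call. The sketch below redeclares one template from the library so it runs standalone; the fill-in values are illustrative:

```python
PROMPT_TEMPLATES = {
    "code_review": """
You are a {language} code reviewer with expertise in {expertise}.
Review this code for {focus_areas}.
{constraints}
""",
}

# Fill the template's named placeholders with concrete values
prompt = PROMPT_TEMPLATES["code_review"].format(
    language="Python",
    expertise="backend services",
    focus_areas="security and performance",
    constraints="Keep the review under 300 words.",
)
print(prompt)
```

`str.format` raises `KeyError` if a placeholder is left unfilled, which is a useful safety net against sending half-finished prompts.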
Key Takeaway
The structure and format of your prompt shape how the model processes information. Use proven patterns (instruction-first, context-first, example-first) based on your task. For complex prompts, use structured markup (XML or Markdown). Distinguish between system prompts (global behavior), user prompts (specific tasks), and output formats (JSON, tables, lists). Template variables make prompts reusable across similar tasks.
Exercise: Build a Prompt Template for Product Descriptions
Your task is to create a reusable prompt template for generating product descriptions.
The Challenge
You sell various products (software, hardware, services) and need a template that can generate descriptions for any of them. The template should:
- Be reusable with variable substitution
- Produce consistent, high-quality descriptions
- Handle different product types
- Include examples of what good descriptions look like
Your Template
Create a template with the following structure:
SYSTEM PROMPT / CONTEXT
[Define the role and constraints]
INSTRUCTIONS
[Clear task description]
VARIABLES (to be filled in)
- product_type: [e.g., "B2B SaaS tool"]
- product_name: [e.g., "CloudSync"]
- key_benefits: [e.g., "automated backups, real-time sync"]
- target_audience: [e.g., "small business owners"]
- tone: [e.g., "friendly and professional"]
- length: [e.g., "100-150 words"]
EXAMPLES (show good and bad)
- Good example: [Full example description]
- Bad example: [What to avoid]
OUTPUT FORMAT
[Specify exactly how to format the description]
Complete Your Template
Fill in actual content for each section. When done, test it by substituting different products and seeing if the template produces consistent, quality results.
Bonus Challenge
Create 2-3 variations of your template:
- One for marketing team use
- One for marketplace listings (Amazon, Shopify, etc.)
- One for technical product documentation
Notice how the same product needs different descriptions for different contexts, and adjust your templates accordingly.