Advanced

Prompt Templates and Dynamic Generation

Lesson 2 of 4 · Estimated time: 50 min


In the previous lesson, you built pipelines with hardcoded prompts. But real systems need flexible, reusable prompts that adapt to different inputs, contexts, and data sources. This lesson teaches you how to use templating systems to generate prompts dynamically while maintaining quality and consistency.

From Hardcoded Prompts to Templates

When you hardcode prompts, you lose flexibility:

# Bad: Hardcoded prompt
def classify_email(email_text):
    prompt = f"Classify this email as spam or not spam: {email_text}"
    # ...

# Inflexible: You can't change the classification labels or system context
# Hard to reuse for other tasks

Templating enables dynamic, reusable prompts:

# Good: Template-based prompt
CLASSIFICATION_TEMPLATE = """You are an email classifier.
Your task is to classify emails as {categories}.

Email to classify:
{email_text}

Respond with ONLY the classification label."""

def classify_email(email_text, categories):
    prompt = CLASSIFICATION_TEMPLATE.format(
        categories=", ".join(categories),
        email_text=email_text
    )
    # ...
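The payoff is reuse: the same template now serves unrelated classification tasks just by swapping the label list. A runnable sketch (the sample emails and label sets below are illustrative):

```python
# Same template, two different tasks: only the data changes.
TEMPLATE = """You are an email classifier.
Your task is to classify emails as {categories}.

Email to classify:
{email_text}

Respond with ONLY the classification label."""

def build_prompt(email_text, categories):
    return TEMPLATE.format(categories=", ".join(categories),
                           email_text=email_text)

# Binary spam detection
spam_prompt = build_prompt("Win a free prize now!", ["spam", "not spam"])

# Priority triage reuses the identical template with different labels
triage_prompt = build_prompt("Server is down!", ["urgent", "normal", "low"])

print(spam_prompt)
print(triage_prompt)
```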

Using Jinja2 Templates

Python’s str.format() works for simple cases, but for complex prompts, use Jinja2, a powerful templating language:

from jinja2 import Template

# Simple substitution
template = Template("The user asked: {{ question }}")
prompt = template.render(question="What is AI?")
# Output: "The user asked: What is AI?"

# Conditionals
template = Template("""
{%- if user_is_premium %}
You have access to all features.
{%- else %}
Some features are limited.
{%- endif %}
""")
prompt = template.render(user_is_premium=True)

# Loops
template = Template("""
You have access to these tools:
{%- for tool in tools %}
  - {{ tool.name }}: {{ tool.description }}
{%- endfor %}
""")
prompt = template.render(tools=[
    {"name": "Calculator", "description": "Perform math"},
    {"name": "Search", "description": "Look up information"}
])

# Filters for text transformation
template = Template("The user said: {{ input | upper }}")
prompt = template.render(input="hello")
# Output: "The user said: HELLO"
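Beyond the built-in filters, Jinja2 lets you register your own on an `Environment`. A minimal sketch; the `truncate_words` filter here is illustrative, not a Jinja2 built-in:

```python
from jinja2 import Environment

env = Environment()

def truncate_words(text, max_words=5):
    """Clamp long user input to a word budget before interpolation."""
    words = text.split()
    return " ".join(words[:max_words]) + ("..." if len(words) > max_words else "")

# Register the custom filter so templates can use it with | syntax
env.filters["truncate_words"] = truncate_words

template = env.from_string("User input: {{ input | truncate_words(3) }}")
prompt = template.render(input="this is a very long user message")
print(prompt)
# Output: "User input: this is a..."
```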

LangChain PromptTemplates

For LLM applications, LangChain provides specialized prompt templates:

from langchain_core.prompts import PromptTemplate, ChatPromptTemplate

# Simple PromptTemplate
prompt_template = PromptTemplate(
    input_variables=["topic", "audience"],
    template="""Write a paragraph about {topic} for {audience}."""
)

prompt = prompt_template.format(
    topic="machine learning",
    audience="business executives"
)
print(prompt)
# Output: "Write a paragraph about machine learning for business executives."

# Chat template with multiple messages
chat_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "Tell me about {topic}"),
    ("assistant", "I'll explain {topic} in simple terms."),
    ("user", "{follow_up_question}")
])

prompt = chat_template.format_prompt(
    topic="neural networks",
    follow_up_question="Can you give an example?"
)

Conditional Logic in Templates

Real prompts often need to adapt based on context:

from jinja2 import Template

ADAPTIVE_TEMPLATE = """You are a customer support agent.

{%- if ticket_priority == "critical" %}
URGENT: This is a critical issue that needs immediate attention.
{%- endif %}

Ticket Summary: {{ ticket_summary }}

{%- if customer_history %}
Customer History: {{ customer_history }}
{%- endif %}

{%- if similar_tickets %}
Similar resolved tickets:
{%- for ticket in similar_tickets %}
  - {{ ticket }}
{%- endfor %}
{%- endif %}

Respond with a helpful solution."""

def generate_support_prompt(ticket_data):
    template = Template(ADAPTIVE_TEMPLATE)
    return template.render(
        ticket_priority=ticket_data.get("priority", "normal"),
        ticket_summary=ticket_data["summary"],
        customer_history=ticket_data.get("customer_history"),
        similar_tickets=ticket_data.get("similar_tickets", [])
    )

# Usage
ticket = {
    "priority": "critical",
    "summary": "Database connection timeout",
    "customer_history": "Long-time customer, first issue",
    "similar_tickets": ["Connection pool exhausted", "Network timeout"]
}

prompt = generate_support_prompt(ticket)
print(prompt)
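Rendering an abbreviated version of the same template with a normal-priority ticket shows the conditionals at work: the urgent banner and optional sections drop out of the output entirely.

```python
from jinja2 import Template

# Abbreviated form of the adaptive template above
ADAPTIVE_TEMPLATE = """You are a customer support agent.
{%- if ticket_priority == "critical" %}
URGENT: This is a critical issue that needs immediate attention.
{%- endif %}

Ticket Summary: {{ ticket_summary }}
{%- if customer_history %}
Customer History: {{ customer_history }}
{%- endif %}

Respond with a helpful solution."""

# Normal priority, no history: both conditional blocks are skipped
prompt = Template(ADAPTIVE_TEMPLATE).render(
    ticket_priority="normal",
    ticket_summary="How do I export my data?",
    customer_history=None,
)
print(prompt)
```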

Dynamic Example Selection

The most powerful templates include few-shot examples selected dynamically based on input similarity:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

class DynamicExampleSelector:
    """Select examples similar to the current input."""

    def __init__(self, examples):
        """
        Args:
            examples: List of dicts with 'input' and 'output' keys
        """
        self.examples = examples

        # Vectorize all examples
        texts = [ex["input"] for ex in examples]
        self.vectorizer = TfidfVectorizer(lowercase=True, stop_words='english')
        self.example_vectors = self.vectorizer.fit_transform(texts)

    def select(self, input_text, k=2):
        """Select k most similar examples."""
        input_vector = self.vectorizer.transform([input_text])
        similarities = cosine_similarity(input_vector, self.example_vectors)[0]

        # Get indices of top k examples
        top_indices = np.argsort(similarities)[-k:][::-1]

        return [self.examples[i] for i in top_indices]

# Define your examples
CLASSIFICATION_EXAMPLES = [
    {
        "input": "This product is amazing! Best purchase ever!",
        "output": "positive"
    },
    {
        "input": "Total waste of money. Completely broken.",
        "output": "negative"
    },
    {
        "input": "It's okay, nothing special.",
        "output": "neutral"
    },
    {
        "input": "Love it! Exceeded my expectations.",
        "output": "positive"
    }
]

selector = DynamicExampleSelector(CLASSIFICATION_EXAMPLES)

# When processing a new input
new_input = "Fantastic product, highly recommend it!"
similar_examples = selector.select(new_input, k=2)

print("Selected examples:")
for ex in similar_examples:
    print(f"  Input: {ex['input']}")
    print(f"  Output: {ex['output']}")

# Build prompt with dynamic examples
PROMPT_WITH_EXAMPLES = """Classify the sentiment of each text.

{%- for example in examples %}
Input: {{ example.input }}
Output: {{ example.output }}

{%- endfor %}

Input: {{ new_input }}
Output:"""

from jinja2 import Template
template = Template(PROMPT_WITH_EXAMPLES)
prompt = template.render(
    examples=similar_examples,
    new_input=new_input
)

print("\nGenerated prompt:")
print(prompt)

Runtime Prompt Construction from External Data

Fetch data from databases or APIs to build rich prompts:

import requests  # would be used for real API calls; unused in this simulation

class ContextualPromptBuilder:
    """Build prompts with data fetched at runtime."""

    def __init__(self, llm_client):
        self.client = llm_client

    def fetch_user_context(self, user_id: str) -> dict:
        """Fetch user information from an API."""
        # In real code, this would call a real API
        return {
            "name": "Alice Johnson",
            "subscription": "premium",
            "account_age_days": 847,
            "previous_issues": ["billing", "feature_request"]
        }

    def fetch_knowledge_base(self, query: str) -> list:
        """Fetch relevant articles from knowledge base."""
        # Simulated knowledge base results
        return [
            {
                "title": "Troubleshooting Connection Issues",
                "content": "Check your network settings..."
            }
        ]

    def build_support_prompt(self, user_id: str, issue_description: str) -> str:
        """Build a rich support prompt with context."""

        # Fetch context
        user = self.fetch_user_context(user_id)
        kb_articles = self.fetch_knowledge_base(issue_description)

        # Build prompt
        prompt = f"""You are a support agent helping {user['name']}.

User Profile:
- Subscription: {user['subscription']}
- Account age: {user['account_age_days']} days
- Previous issues: {', '.join(user['previous_issues'])}

Current Issue:
{issue_description}

Relevant Knowledge Base Articles:"""

        for i, article in enumerate(kb_articles, 1):
            prompt += f"\n{i}. {article['title']}\n{article['content']}\n"

        prompt += "\nProvide a helpful response based on user context and KB articles."

        return prompt

# Usage
builder = ContextualPromptBuilder(None)  # In real code, pass actual LLM client
prompt = builder.build_support_prompt(
    user_id="user_123",
    issue_description="I can't connect to the database"
)
print(prompt)
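Data fetched at runtime can be arbitrarily large, so it is worth clamping it before interpolation to keep the prompt inside the model's context window. A minimal sketch; the 200-character budget is an arbitrary stand-in for a real token count:

```python
def clamp(text: str, budget: int = 200) -> str:
    """Truncate fetched text to a character budget, marking the cut."""
    if len(text) <= budget:
        return text
    return text[:budget].rstrip() + " [truncated]"

# Simulated oversized knowledge-base article
article = {"title": "Troubleshooting Connection Issues",
           "content": "Check your network settings. " * 50}

snippet = f"1. {article['title']}\n{clamp(article['content'])}"
print(snippet)
```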

Template Testing and Validation

Before deploying templates, test them thoroughly:

import unittest
from jinja2 import Template, StrictUndefined, UndefinedError

class TestPromptTemplate(unittest.TestCase):

    def test_all_variables_provided(self):
        """Template renders correctly with all variables."""
        template = Template("Hello {{ name }}, you are {{ age }} years old.")
        result = template.render(name="Alice", age=30)
        self.assertEqual(result, "Hello Alice, you are 30 years old.")

    def test_missing_variable_fails(self):
        """Template raises an error if a variable is missing."""
        # By default Jinja2 renders missing variables as empty strings;
        # StrictUndefined makes them raise UndefinedError instead.
        template = Template("Hello {{ name }}", undefined=StrictUndefined)
        with self.assertRaises(UndefinedError):
            template.render()  # Missing 'name'

    def test_conditional_logic(self):
        """Conditional blocks work correctly."""
        template = Template("""
        {%- if is_urgent %}
        URGENT
        {%- else %}
        Normal
        {%- endif %}""")

        self.assertIn("URGENT", template.render(is_urgent=True))
        self.assertIn("Normal", template.render(is_urgent=False))

    def test_loop_with_empty_list(self):
        """Loops handle empty lists gracefully."""
        template = Template("""
        {%- for item in items %}
        - {{ item }}
        {%- else %}
        No items
        {%- endfor %}""")

        result = template.render(items=[])
        self.assertIn("No items", result)

class PromptTemplateValidator:
    """Validate that a template is production-ready."""

    @staticmethod
    def validate_template_string(template_str: str, required_vars: list) -> dict:
        """
        Validate template for common issues.

        Returns:
            Dict with validation results
        """
        results = {
            "valid": True,
            "warnings": [],
            "errors": []
        }

        # Parse the template; syntax errors (e.g. unclosed tags) invalidate it
        from jinja2 import Environment, meta
        from jinja2.exceptions import TemplateSyntaxError

        env = Environment()
        try:
            ast = env.parse(template_str)
        except TemplateSyntaxError as e:
            results["errors"].append(f"Template syntax error: {e}")
            results["valid"] = False
            return results

        # Check that every required variable is actually used in the template
        template_vars = meta.find_undeclared_variables(ast)
        for var in required_vars:
            if var not in template_vars:
                results["warnings"].append(f"Required variable '{var}' not found")

        # Check for overly long templates (hard to debug)
        if len(template_str) > 2000:
            results["warnings"].append("Template is quite long; consider breaking it up")

        return results

# Usage
validator = PromptTemplateValidator()
result = validator.validate_template_string(
    "Hello {{ name }}, your status is {{ status }}",
    required_vars=["name", "status"]
)
print(result)

Building a Template Library

Create reusable templates for common tasks:

class PromptLibrary:
    """Central repository for prompt templates."""

    TEMPLATES = {
        "classification": """Classify the following text as one of: {categories}

Text: {input}

Classification:""",

        "summarization": """Summarize the following text in {length} sentences.

Text: {input}

Summary:""",

        "extraction": """Extract {fields} from the following text.

Text: {input}

Respond in JSON format.

Result:""",

        "qa": """Answer the question based on the context provided.

Context: {context}

Question: {question}

Answer:"""
    }

    @classmethod
    def get_template(cls, name: str) -> str:
        """Retrieve a template by name."""
        if name not in cls.TEMPLATES:
            raise ValueError(f"Template '{name}' not found")
        return cls.TEMPLATES[name]

    @classmethod
    def render(cls, template_name: str, **kwargs) -> str:
        """Render a template with provided variables."""
        # These templates use str.format-style {placeholders},
        # so render them with .format() rather than Jinja2.
        template_str = cls.get_template(template_name)
        return template_str.format(**kwargs)

# Usage
prompt = PromptLibrary.render(
    "classification",
    categories="positive, negative, neutral",
    input="This product is fantastic!"
)
print(prompt)

Key Takeaway: Templating separates prompt logic from data, making your system more maintainable, testable, and flexible. Dynamic example selection adapts prompts to each input, improving quality without changing code.

Exercise: Build a Dynamic Customer Response System

Create a system that generates personalized customer responses using:

  1. A base Jinja2 template with conditional blocks
  2. Dynamic example selection based on input similarity
  3. Runtime fetching of customer data
  4. Comprehensive template validation

Requirements:

  • Template includes conditionals for customer tier (free/basic/premium)
  • Dynamically selects 2-3 similar support examples
  • Fetches customer name, account age, previous interactions
  • Validates template before rendering
  • Generates different responses for urgent vs. normal tickets

Starter code:

from jinja2 import Template
from datetime import datetime

class CustomerResponseSystem:
    def __init__(self):
        self.example_selector = DynamicExampleSelector(SUPPORT_EXAMPLES)
        self.validator = PromptTemplateValidator()

    def generate_response_prompt(self,
        customer_id: str,
        ticket_summary: str,
        is_urgent: bool = False
    ) -> str:
        """Generate a personalized response prompt."""

        # TODO: Fetch customer data
        # TODO: Select similar examples
        # TODO: Build template
        # TODO: Validate
        # TODO: Render and return
        pass

# Test data
SUPPORT_EXAMPLES = [
    {"input": "billing issue", "output": "Let me check your account..."},
    {"input": "feature request", "output": "Thanks for the suggestion..."},
    # ... more examples
]

system = CustomerResponseSystem()
prompt = system.generate_response_prompt(
    customer_id="cust_456",
    ticket_summary="I was charged twice this month",
    is_urgent=True
)

Extension challenges:

  • Implement template versioning and A/B testing
  • Add multi-language support to templates
  • Create a template editor UI with live preview
  • Track template performance and suggest improvements
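As a starting point for the versioning challenge, one possible shape is a registry keyed by (name, version). Everything here, including the class name, is a hypothetical sketch rather than a prescribed design:

```python
class VersionedPromptLibrary:
    """Hypothetical sketch: store multiple versions of each template."""

    def __init__(self):
        self._templates = {}  # (name, version) -> template string

    def register(self, name: str, version: int, template: str):
        self._templates[(name, version)] = template

    def get(self, name: str, version=None) -> str:
        if version is None:
            # Default to the latest registered version
            version = max(v for (n, v) in self._templates if n == name)
        return self._templates[(name, version)]

library = VersionedPromptLibrary()
library.register("greeting", 1, "Hello {name}.")
library.register("greeting", 2, "Hi {name}, welcome back!")

print(library.get("greeting"))     # latest: version 2
print(library.get("greeting", 1))  # pinned: version 1
```

Pinning a version per experiment arm is one simple way to run the A/B test the challenge asks for.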

By mastering prompt templates, you’ll be able to scale personalized LLM interactions without proportionally increasing code complexity.