Foundations

Building Chains and Prompt Templates

Lesson 2 of 4 · Estimated time: 45 min

Prompts are the interface between your application and the model. Well-designed prompts produce better results. In this lesson, you’ll master prompt templates, create sophisticated chains, and optimize for consistency.

Prompt Templates Explained

A prompt template is a parameterized prompt. Instead of hardcoding the content, you define variables that get filled in at runtime:

from langchain_core.prompts import PromptTemplate

# Simple template with one variable
template = "Write a {num_words}-word essay about {topic}"

prompt = PromptTemplate(
    input_variables=["num_words", "topic"],
    template=template
)

# Use it
output = prompt.format(num_words=100, topic="artificial intelligence")
print(output)
# Output: "Write a 100-word essay about artificial intelligence"
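Under the hood, filling a template like this behaves much like Python's own `str.format`. A simplified, framework-free sketch of the same idea:

```python
# A plain template string with the same placeholder style PromptTemplate uses
template = "Write a {num_words}-word essay about {topic}"

# Filling the variables is essentially str.format
output = template.format(num_words=100, topic="artificial intelligence")
print(output)  # Write a 100-word essay about artificial intelligence
```

PromptTemplate adds validation of `input_variables` on top, but the substitution itself works the same way.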

Templates can be complex with formatting instructions:

template = """You are an expert in {field}.
Answer the following question clearly and concisely.
Question: {question}
Make sure your answer includes at least one example."""

prompt = PromptTemplate(
    input_variables=["field", "question"],
    template=template
)

result = prompt.format(
    field="machine learning",
    question="What is overfitting?"
)
print(result)

Chat Prompt Templates

Chat models work with structured messages (system, user, assistant). Use ChatPromptTemplate:

from langchain_core.prompts import ChatPromptTemplate

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant expert in {field}."),
    ("human", "{question}"),
])

messages = chat_prompt.format_messages(
    field="physics",
    question="Explain quantum entanglement"
)

# messages is a list of message objects
for msg in messages:
    print(f"{msg.type}: {msg.content}")

This creates a proper message structure that chat models expect, with distinct roles.
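If you have not used a chat API before, that structure is just an ordered list of role/content pairs. A framework-free sketch of what `format_messages` is building, using plain dicts:

```python
def build_messages(field, question):
    # Mirrors the system/human pair above as plain role/content dicts
    return [
        {"role": "system", "content": f"You are a helpful assistant expert in {field}."},
        {"role": "user", "content": question},
    ]

messages = build_messages("physics", "Explain quantum entanglement")
for msg in messages:
    print(f"{msg['role']}: {msg['content']}")
```

LangChain wraps these in message objects, but the shape the model ultimately sees is this list of roles and contents.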

Few-Shot Prompts

Sometimes the clearest instruction is a set of worked examples. Use FewShotPromptTemplate to show the model the pattern you want it to follow:

from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

# Examples to show the model
examples = [
    {
        "input": "happy",
        "output": "sad"
    },
    {
        "input": "tall",
        "output": "short"
    },
    {
        "input": "big",
        "output": "small"
    }
]

# Template for each example
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Word: {input}\nOpposite: {output}"
)

# Combine into few-shot prompt
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Word: {input}\nOpposite:",
    input_variables=["input"]
)

result = few_shot_prompt.format(input="fast")
print(result)
# Shows examples then asks model to continue the pattern

Demonstrating the pattern through examples like this is a powerful way to get consistent output.
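To see what the model actually receives, here is a plain-Python sketch that assembles the same kind of string (joining the formatted examples and the suffix with blank lines, matching the default separator):

```python
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "big", "output": "small"},
]

def build_few_shot(examples, query):
    # One formatted block per example, then the suffix with the new input
    parts = [f"Word: {e['input']}\nOpposite: {e['output']}" for e in examples]
    parts.append(f"Word: {query}\nOpposite:")
    return "\n\n".join(parts)

print(build_few_shot(examples, "fast"))
```

The final string ends mid-pattern (`Opposite:`), inviting the model to complete it.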

Sequential Chains

Chains can feed output from one step into the next. Use LCEL to compose them:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-3.5-turbo")

# Step 1: Summarize
summary_prompt = ChatPromptTemplate.from_template(
    "Summarize this text in 1-2 sentences:\n{text}"
)
summary_chain = summary_prompt | model | StrOutputParser()

# Step 2: Extract key concepts from summary
concept_prompt = ChatPromptTemplate.from_template(
    "Extract 3 key concepts from this text:\n{text}"
)
concept_chain = concept_prompt | model | StrOutputParser()

# Step 3: Generate questions from concepts
question_prompt = ChatPromptTemplate.from_template(
    "Generate 2 study questions about:\n{concepts}"
)
question_chain = question_prompt | model | StrOutputParser()

# Combine into one chain; each RunnableLambda repackages the previous
# step's string output into the dict the next prompt expects
from langchain_core.runnables import RunnableLambda

full_chain = (
    summary_chain
    | RunnableLambda(lambda x: {"text": x})
    | concept_chain
    | RunnableLambda(lambda x: {"concepts": x})
    | question_chain
)

text = "Machine learning enables computers to learn from data without explicit programming..."
result = full_chain.invoke({"text": text})
print(result)
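The pipe operator is doing function composition: each step's output becomes the next step's input. A minimal framework-free sketch of the idea:

```python
def pipe(*steps):
    """Compose callables left to right, like chain1 | chain2 in LCEL."""
    def run(x):
        for step in steps:
            x = step(x)
        return x
    return run

# Each step's output feeds the next step
clean_and_shout = pipe(str.strip, str.upper)
print(clean_and_shout("  hello  "))  # HELLO
```

LCEL's runnables add batching, streaming, and async on top, but the data flow is this left-to-right composition.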

Parallel Chains

Run multiple chains in parallel and combine results:

from langchain_core.runnables import RunnableParallel

# Two independent analyses of the same input
sentiment_prompt = ChatPromptTemplate.from_template(
    "Classify the sentiment of this text as positive, negative, or neutral:\n{text}"
)
sentiment_chain = sentiment_prompt | model | StrOutputParser()
summary_chain = summary_prompt | model | StrOutputParser()

# Run both in parallel
parallel_chain = RunnableParallel({
    "sentiment": sentiment_chain,
    "summary": summary_chain
})

results = parallel_chain.invoke({"text": "I loved this product!"})
print(results)
# e.g. {"sentiment": "positive", "summary": "The reviewer enjoyed the product."}

This is efficient: both chains execute concurrently rather than sequentially.
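Conceptually, RunnableParallel fans the same input out to every branch and collects the results by name. A sketch with a thread pool (illustrative only, not LangChain's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(branches, inputs):
    # Submit every branch with the same input, then collect results by name
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, inputs) for name, fn in branches.items()}
        return {name: fut.result() for name, fut in futures.items()}

results = run_parallel(
    {"length": lambda x: len(x["text"]), "upper": lambda x: x["text"].upper()},
    {"text": "I loved this product!"},
)
print(results)  # {'length': 21, 'upper': 'I LOVED THIS PRODUCT!'}
```

Because model calls are I/O-bound, running them concurrently roughly halves the wall-clock time for two independent branches.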

Conditional Logic in Chains

Make chains branch based on input conditions:

from langchain_core.runnables import RunnableBranch

# Classify input
def classify_query(text):
    technical_words = ["error", "bug", "api"]
    return "technical" if any(word in text.lower() for word in technical_words) else "general"

# Technical support chain
technical_prompt = ChatPromptTemplate.from_template(
    "You are a technical expert. Answer: {question}"
)
technical_chain = technical_prompt | model | StrOutputParser()

# General chat chain
general_prompt = ChatPromptTemplate.from_template(
    "You are a friendly assistant. Answer: {question}"
)
general_chain = general_prompt | model | StrOutputParser()

# Branch based on classification; each branch is a (condition, runnable)
# pair, and the last argument is the default
conditional_chain = RunnableBranch(
    (lambda x: classify_query(x["question"]) == "technical", technical_chain),
    general_chain,
)

# Use it
result = conditional_chain.invoke({"question": "Why is my API returning 500 errors?"})
print(result)

Handling Variables

Pass data through chains without repeating it:

from langchain_core.runnables import RunnablePassthrough

# A chain that needs multiple pieces of information
# (context_retriever and combine_prompt are defined elsewhere)
chain = (
    {"context": context_retriever, "question": RunnablePassthrough()}
    | combine_prompt
    | model
    | StrOutputParser()
)

# RunnablePassthrough forwards the original input unchanged, so the
# question reaches the prompt alongside the retrieved context
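Framework aside, the mapping step is simple to picture. A plain-Python sketch of what that dict does (the retriever here is a hypothetical stand-in):

```python
def passthrough(x):
    # RunnablePassthrough's whole job: return the input unchanged
    return x

def fake_retriever(question):
    # Hypothetical stand-in for a real document retriever
    return f"docs about {question}"

# Mirrors the {"context": ..., "question": RunnablePassthrough()} mapping
def build_prompt_inputs(question):
    return {"context": fake_retriever(question), "question": passthrough(question)}

inputs = build_prompt_inputs("What is LCEL?")
print(inputs)
```

Every key in the mapping runs against the same original input, so the question appears both transformed (as retrieved context) and untouched.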

Error Handling in Chains

Handle errors gracefully within chains:

# Chain with a fallback: if primary_chain raises an exception,
# fallback_chain handles the same input instead
# (both chains assumed to be defined elsewhere)
try_chain = primary_chain.with_fallbacks([fallback_chain])

# Or use try/except
def safe_invoke(chain, inputs):
    try:
        return chain.invoke(inputs)
    except Exception as e:
        print(f"Error: {e}")
        return "I couldn't process that request"

result = safe_invoke(my_chain, {"input": data})

Optimizing Prompts

Write prompts that consistently produce good output:

# Bad: Vague
prompt = "Tell me about AI"

# Better: Specific
prompt = """Explain machine learning in exactly 3 sentences.
Each sentence should cover a different aspect: definition, application, and limitation."""

# Better: With context
prompt = """You are writing for software engineers with no ML background.
Explain machine learning in 3 sentences.
Use a practical code example."""

# Best: With few-shot examples
prompt = ChatPromptTemplate.from_messages([
    ("system", """You extract company names from text.
Examples:
Text: "Apple and Microsoft are tech companies"
Companies: Apple, Microsoft"""),
    ("human", "{text}"),
])

Key principles:

  • Be specific about what you want
  • Provide context about the audience
  • Specify format (JSON, markdown, etc.)
  • Give examples when possible
  • State constraints (length, tone, etc.)
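As an example of the "specify format" principle, a prompt can pin the output schema down explicitly. The template text below is a hypothetical illustration:

```python
# Double braces escape literal JSON braces inside str.format-style templates
schema_prompt = (
    "Extract entities from the text below.\n"
    'Return ONLY valid JSON shaped like {{"people": [], "companies": []}}.\n'
    "Text: {text}"
)
filled = schema_prompt.format(text="Tim Cook leads Apple.")
print(filled)
```

The same brace-escaping rule applies when you put literal JSON inside a PromptTemplate, since it uses the same placeholder syntax.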

Testing Prompt Variations

Experiment to find what works:

from typing import Callable

class PromptTester:
    """Test different prompt variations."""

    def __init__(self, model):
        self.model = model
        self.results = []

    def test_prompts(self, input_data: str, prompts: dict[str, str]):
        """Test multiple prompts on same input."""
        for name, prompt_template in prompts.items():
            try:
                result = self.model.invoke(prompt_template.format(input=input_data))
                self.results.append({
                    "prompt_name": name,
                    "result": result.content,
                    "success": True
                })
            except Exception as e:
                self.results.append({
                    "prompt_name": name,
                    "error": str(e),
                    "success": False
                })

    def get_results(self):
        """Get test results."""
        return self.results

# Usage
tester = PromptTester(model)

prompts = {
    "direct": "Extract entities from: {input}",
    "detailed": "Carefully extract all entities from: {input}\nList each entity type separately.",
    "json": "Extract entities from: {input}\nReturn as JSON.",
}

tester.test_prompts("Apple and Microsoft are companies", prompts)

for result in tester.get_results():
    if result["success"]:
        print(f"{result['prompt_name']}: {result['result']}")
    else:
        print(f"{result['prompt_name']} failed: {result['error']}")

Key Takeaway

Prompt templates parameterize instructions for reusability. ChatPromptTemplate creates proper message structures. Chain components together with LCEL using the pipe operator. Run chains sequentially or in parallel. Use conditionals to branch based on input. Test variations to find what works best for your use case.

Exercises

  1. Prompt templates: Create templates for different tasks (summarization, classification, extraction). Test with various inputs.

  2. Few-shot learning: Build a few-shot prompt that demonstrates a pattern. Verify the model follows it.

  3. Sequential chain: Create a 3-step chain that transforms data at each step.

  4. Parallel execution: Build a chain that runs two analyses in parallel on the same input.

  5. Conditional logic: Create a chain that branches based on input classification.

  6. Prompt optimization: Compare results from 3 different prompt variations on the same task. Identify which works best.