Intermediate

Designing Human-AI Interaction

Lesson 2 of 4 · Estimated time: 50 min

Why UX for AI is Different

Traditional UX design is about guiding users through deterministic flows. AI UX is about managing expectations around probabilistic outputs and building trust when outcomes are uncertain.

Users worry: Will this be right? What if it gets it wrong? How do I override it? Why did it make this decision? Can I trust it?

Your job is designing interfaces that build trust, provide transparency, and make it easy for users to work effectively with AI.

Core UX Principles for AI

1. Be Transparent About Uncertainty

Users should know when the AI is confident vs. uncertain.

Pattern: Confidence Indicators

High confidence (90%+):
✓ Email is URGENT
[Acknowledge] [Fix]

Medium confidence (70-90%):
⚠ Email might be URGENT
[Confirm Category] [Different]

Low confidence (<70%):
? Email category unclear; needs review
[Categorize manually]

When to show confidence:

  • Always for critical decisions
  • For borderline cases where user might want to review
  • Never hide when uncertain (that’s when it most needs review)

How to communicate confidence:

  • Numerical: “92% confidence”
  • Visual: Color (red=low, yellow=medium, green=high)
  • Verbal: “I’m quite sure” vs. “I’m uncertain”
  • Behavioral: “Flag for review” automatically for low confidence
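The thresholds and treatments above can be sketched as a single mapping. This is an illustrative sketch, not a fixed API: the tier names, cutoffs, and action strings are assumptions chosen to match the pattern described.

```python
# Hypothetical sketch: map a model confidence score to a UI treatment.
# Thresholds (0.90, 0.70) mirror the tiers described above.

def confidence_treatment(score: float) -> dict:
    """Return how the UI should present a prediction at this confidence."""
    if score >= 0.90:
        return {"tier": "high", "icon": "✓", "color": "green",
                "action": "show result, offer quick correction"}
    if score >= 0.70:
        return {"tier": "medium", "icon": "⚠", "color": "yellow",
                "action": "ask user to confirm the category"}
    return {"tier": "low", "icon": "?", "color": "red",
            "action": "route to manual categorization"}

print(confidence_treatment(0.92)["tier"])  # high
print(confidence_treatment(0.55)["tier"])  # low
```

Keeping the mapping in one place makes it easy to tune thresholds later without hunting through UI code.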

2. Make It Easy to Correct

Users will encounter AI mistakes. Make correction effortless.

Pattern: One-Click Feedback

AI output: "Category: Billing"
[✓ Correct] [✗ Wrong - actually: ___]

Rather than:

AI output: "Category: Billing"
Does this look correct? [Yes] [No]
[If No] What's the correct category?
  [ ] Billing
  [ ] Technical Support
  [ ] Refunds
  [etc.]

Why single-click works:

  • Removes friction (more people correct wrong answers)
  • System learns from corrections
  • Shows users their feedback matters
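A minimal backend for one-click feedback can be sketched as follows; the class and field names are illustrative, and a real system would persist the records rather than keep them in memory.

```python
# Sketch of a one-click feedback store. Each click records whether the
# AI label was right and, if wrong, the corrected label, so the system
# can learn from corrections later.

from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def confirm(self, item_id: str, predicted: str):
        # User clicked [✓ Correct]
        self.records.append({"item": item_id, "predicted": predicted,
                             "correct": True, "actual": predicted})

    def correct(self, item_id: str, predicted: str, actual: str):
        # User clicked [✗ Wrong] and supplied the right label
        self.records.append({"item": item_id, "predicted": predicted,
                             "correct": False, "actual": actual})

    def accuracy(self) -> float:
        if not self.records:
            return 0.0
        return sum(r["correct"] for r in self.records) / len(self.records)

log = FeedbackLog()
log.confirm("email-1", "Billing")
log.correct("email-2", "Billing", "Refunds")
print(log.accuracy())  # 0.5
```

Note that both paths are a single call: the UI never needs a multi-step confirmation dialog to capture the correction.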

3. Explain When It Matters

Users don’t always need explanations, but they should have access to them.

Pattern: Explainability on Demand

Email categorized: URGENT
[Why? ▼]

When expanded:
- Contains keywords: "broken", "doesn't work", "urgent"
- Sender has history of urgent tickets
- Similar emails were previously urgent

When to offer explanation:

  • Always when confidence is medium (70-90%)
  • For high-stakes decisions
  • For decisions users disagree with
  • When it helps users refine their input

When explanation isn’t needed:

  • Simple straightforward cases (user knows why)
  • Time-sensitive decisions (users need to act, and will defer to the system)
  • High-confidence predictions (a weak explanation can undermine justified confidence)
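One way to implement explainability on demand is to compute (or store) the reasons alongside the prediction and render them only when the user expands "Why?". The sketch below is illustrative: the signal names and reason strings are assumptions modeled on the example above.

```python
# Sketch: build human-readable reasons for an URGENT categorization.
# Reasons stay hidden until the user expands the "Why?" control.

def explain_urgent(email: dict) -> list:
    reasons = []
    keywords = {"broken", "doesn't work", "urgent"}
    hits = keywords & set(email.get("words", []))
    if hits:
        reasons.append("Contains keywords: " + ", ".join(sorted(hits)))
    if email.get("sender_urgent_history"):
        reasons.append("Sender has history of urgent tickets")
    if email.get("similar_urgent"):
        reasons.append("Similar emails were previously urgent")
    return reasons

email = {"words": ["printer", "broken", "urgent"],
         "sender_urgent_history": True, "similar_urgent": False}
for reason in explain_urgent(email):
    print("-", reason)
```

Returning a list of short strings keeps the UI free to render them as bullets, a tooltip, or an expandable panel.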

4. Provide Real-Time Feedback

Users want immediate feedback on whether AI is working.

Pattern: In-Context Feedback

[Processing...]
→ System is analyzing your document
→ Found 3 key topics
→ Generating summary...
✓ Done! Here's the summary:

Rather than silent processing and sudden output.

Benefits:

  • Users know something is happening
  • Helps calibrate expectations (this is complex, may take 30 seconds)
  • Users can cancel if they change their mind
  • Transparency builds trust
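The in-context feedback flow can be sketched with a generator: each step yields a status line the UI renders immediately instead of staying silent until the final result. The step names and placeholder "analysis" are illustrative.

```python
# Sketch: stream progress updates during a long-running AI task.

import time

def summarize_with_progress(document: str):
    yield "System is analyzing your document"
    time.sleep(0.01)  # stand-in for real model work
    topics = ["pricing", "onboarding", "support"]  # placeholder analysis
    yield f"Found {len(topics)} key topics"
    yield "Generating summary..."
    time.sleep(0.01)
    yield "Done! Here's the summary: " + ", ".join(topics)

for status in summarize_with_progress("..."):
    print(status)
```

Because the generator yields between steps, the caller can also stop iterating to implement the "cancel if they change their mind" behavior.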

5. Design for Graceful Failure

AI sometimes fails. Design for it gracefully.

Pattern: Fallback Behaviors

For auto-routing email:

  • AI is 90% confident → auto-route
  • AI is 50% confident → route to queue with note
  • AI can’t decide → human review

For suggested content:

  • If generated text is poor quality → hide suggestion
  • If AI times out → “I’m taking longer than expected; try a simpler prompt”
  • If no good matches → “I couldn’t find anything; try different keywords”

User experience:

  • Never silently fail (user should know)
  • Provide alternative (human review, manual process)
  • Explain what went wrong (when helpful)
  • Offer retry/reset option
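The email auto-routing fallback above can be sketched as an explicit decision function. Thresholds and queue names are assumptions; the point is that every confidence level has a defined, non-silent fallback.

```python
# Sketch: route an email based on classifier confidence, with graceful
# fallbacks instead of silent failure.

def route_email(category: str, confidence: float) -> dict:
    if confidence >= 0.90:
        # Confident: auto-route with no extra ceremony
        return {"destination": category, "needs_human": False, "note": None}
    if confidence >= 0.50:
        # Uncertain: still route, but attach a note for the recipient queue
        return {"destination": category, "needs_human": False,
                "note": f"Auto-routed at {confidence:.0%} confidence; verify"}
    # Can't decide: hand off to a human
    return {"destination": "human_review_queue", "needs_human": True,
            "note": "AI could not decide"}

print(route_email("billing", 0.95))
print(route_email("billing", 0.30)["destination"])  # human_review_queue
```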

Common AI UX Patterns

Pattern 1: AI Suggests, Human Confirms

Use when: Quality matters, some errors are acceptable, and the alternative is doing the work manually.

UX flow:

  1. AI analyzes input
  2. Shows suggestion with confidence
  3. User sees options: [Accept] [Modify] [Reject]
  4. On accept/modify, user sees confirmation
  5. System learns from decision

Example: Lead scoring

Lead: acme@company.com

AI Assessment: High-value lead (87% confidence)
- Company size: Enterprise
- Industry: Retail
- Engagement: High

[Call lead] [Review more] [Skip]

Design considerations:

  • Make accepting easy (default state)
  • Make rejecting easy (one click)
  • Show why AI made decision
  • Learn from rejections

Pattern 2: AI Generates Variations, Human Chooses

Use when: Quality is unpredictable, user wants options, creativity matters.

UX flow:

  1. User provides input or prompt
  2. AI generates 3-5 variations
  3. User reviews and picks favorite
  4. User can edit/refine

Example: Email subject line generator

Task: Write subject line for product launch email

Generated options:
1) Introducing: The all-new [Product]
2) We're excited to launch [Product] today
3) Your [Product] is here
4) Announcing [Product]: What you need to know
5) [Product] is now available

[Pick #1] [Pick #2] ... [Generate new batch]

Design considerations:

  • Show multiple options (variety helps user choice)
  • Easy to edit winner
  • “Generate more” option for variety
  • Show what made each variation different
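The generate-and-choose loop can be sketched as below. The `generate_subject_lines` function is a hypothetical stand-in for a real model call; here it fills templates so the flow is runnable end to end.

```python
# Sketch: generate a batch of variations, let the user pick one.

TEMPLATES = [
    "Introducing: The all-new {p}",
    "We're excited to launch {p} today",
    "Your {p} is here",
    "Announcing {p}: What you need to know",
    "{p} is now available",
]

def generate_subject_lines(product: str, n: int = 5) -> list:
    # Stand-in for a model call that returns n candidate subject lines
    return [t.format(p=product) for t in TEMPLATES[:n]]

def pick(options: list, index: int) -> str:
    """User selects one option (1-based, matching the UI numbering)."""
    return options[index - 1]

options = generate_subject_lines("Acme Widget")
print(pick(options, 3))  # Your Acme Widget is here
```

A "Generate new batch" button would simply call `generate_subject_lines` again; the chosen winner can then be handed to an ordinary text editor for refinement.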

Pattern 3: AI Automates Routine, Flags Exceptions

Use when: 80%+ of cases are routine, complex cases need human.

UX flow:

  1. AI processes automatically when confident
  2. Exceptions flagged for human review
  3. Humans have clear signal that something unusual happened
  4. System tracks accuracy and retrains

Example: Content moderation

System processes 10,000 posts/day:
- 9,200 clearly safe → published
- 700 clearly policy violations → removed
- 100 unclear → flagged for human review

Human reviewer sees flagged posts with:
- AI assessment
- Confidence level
- Why it flagged
[Approve] [Remove]

Design considerations:

  • Clear signal for exceptions (color, badge, alert)
  • Humans understand why something was flagged
  • Easy review workflow
  • System learns from human decisions
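The triage logic behind the moderation example can be sketched as follows. The safety score and its thresholds are illustrative assumptions: clearly safe posts publish, clear violations are removed, and the ambiguous middle is flagged with a reason for the human reviewer.

```python
# Sketch: automate the routine cases, flag the ambiguous ones.

def triage(post_id: str, safe_score: float) -> dict:
    if safe_score >= 0.95:
        return {"post": post_id, "action": "publish", "flagged": False}
    if safe_score <= 0.05:
        return {"post": post_id, "action": "remove", "flagged": False}
    # Ambiguous: route to a human with the reason attached
    return {"post": post_id, "action": "human_review", "flagged": True,
            "why": f"safety score {safe_score:.2f} is ambiguous"}

results = [triage(p, s) for p, s in
           [("a", 0.99), ("b", 0.01), ("c", 0.50)]]
print([r["action"] for r in results])  # ['publish', 'remove', 'human_review']
```

Attaching the `why` field to flagged posts gives reviewers the "clear signal" called for above, and the logged decisions become training data for the next model iteration.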

Managing User Trust

The Trust Curve

Users’ trust in AI follows a pattern:

Phase 1: Enthusiasm (Week 1-2)

  • “This is amazing!”
  • Users trust everything
  • Risk: Over-reliance

Phase 2: Discovery (Week 2-4)

  • “Wait, sometimes it gets this wrong”
  • Trust decreases
  • Users become skeptical

Phase 3: Calibration (Week 4+)

  • “I know when to trust it and when to verify”
  • Trust stabilizes at appropriate level
  • Users use it effectively

Your job is helping users reach Phase 3 safely.

Building Appropriate Trust

Do:

  • Show accuracy honestly (don’t hide failures)
  • Have users verify important outputs initially
  • Make errors visible and learnable
  • Celebrate when users catch something AI missed
  • Acknowledge limitations openly

Don’t:

  • Oversell accuracy
  • Hide mistakes
  • Punish users for not trusting AI
  • Force use of AI over human judgment
  • Pretend AI is smarter than it is

Example communication:

“This AI is right about 85% of the time. When it’s wrong, it’s usually on edge cases. We recommend reviewing anything you’re unsure about. It gets better as you use it and provide feedback.”

Reducing Cognitive Load

Using AI shouldn’t require users to become AI experts.

Simple Interaction Patterns

Bad (cognitive heavy):

“You can adjust these parameters: temperature (0.0 to 1.0), top_p (0.0 to 1.0), max_tokens (1 to 4096), frequency_penalty (−2.0 to 2.0). What values would you like?”

Good (simple):

“How creative should the response be?” [Conservative] [Balanced] [Creative]

Behind the scenes, your system translates to appropriate parameters.
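That translation can be as simple as a preset table. The parameter values below are illustrative defaults, not recommendations for any particular model.

```python
# Sketch: translate a user-facing creativity choice into sampling
# parameters behind the scenes. Values are illustrative, not tuned.

PRESETS = {
    "conservative": {"temperature": 0.2, "top_p": 0.8},
    "balanced":     {"temperature": 0.7, "top_p": 0.95},
    "creative":     {"temperature": 1.0, "top_p": 1.0},
}

def params_for(choice: str) -> dict:
    """Map the user's one-word choice to model parameters."""
    return PRESETS[choice.lower()]

print(params_for("Balanced")["temperature"])  # 0.7
```

The user sees three buttons; the system owns the parameter details, so they can be retuned without any UI change.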

Progressive Disclosure

Bad (overwhelming): Show all 20 options at once.

Good (progressive):

  • Simple interface shows basic options
  • “Advanced options” expands for power users
  • Most users never need advanced section

Defaults and Presets

Bad:

“Configure the system” [500 parameters]

Good:

“Choose a preset” [Writing assistant] [Summarizer] [Q&A] [Custom]

  • Each preset has sensible defaults
  • Users rarely need to adjust

Designing for Errors and Edge Cases

Common Error Scenarios

1. AI produces inappropriate output

  • Examples: Biased suggestions, offensive language, confidential information
  • Prevention: Content filtering, safety checks
  • Recovery: Easy report/disable, human review

2. AI confidence is completely wrong

  • Example: 95% confident but actually wrong
  • Prevention: Calibration (test and tune confidence scores)
  • Recovery: Clear explanation, easy correction

3. User interprets output wrong

  • Example: the user reads “AI suggests a $50K budget” as “a $50K budget is realistic,” when the AI only recommended it as a starting point
  • Prevention: Clear labeling, context
  • Recovery: Provide explanation, education

4. AI produces inconsistent output

  • Example: Same input produces different output (randomness in model)
  • Prevention: Set temperature to 0 for deterministic tasks
  • Recovery: Show variation, let user pick

Testing for Edge Cases

Critical questions:

  • What happens if input is empty or invalid?
  • What happens if model fails to respond?
  • What if confidence is way off?
  • What if user disagrees with every suggestion?
  • What if user never provides feedback?

Test with real users:

  • Have 10-20 users use the feature
  • Explicitly ask: “When would you not trust this?”
  • Watch where they hesitate
  • Fix hesitation points

Accessibility for AI Features

AI features should work for all users.

Common Issues

Vision impairment:

  • Confidence scores shown only as color (inaccessible)
  • AI-generated images without alt text

Cognitive load:

  • Too much information at once
  • Unclear explanations

Motor impairment:

  • Requires precise clicking
  • No keyboard shortcuts

Fixes

Vision: Use color + text + icons for confidence

✓ High confidence (90%) [green indicator]
⚠ Medium confidence (70%) [yellow indicator]
✗ Low confidence (40%) [red indicator]

Cognition: Progressive disclosure, clear language

Motor: Keyboard shortcuts, large click targets

Privacy and AI

Users worry about data. Be transparent.

Privacy Communication

Be explicit:

  • “Your data is used to improve this feature”
  • “Your data is not shared with [third parties]”
  • “You can opt out of [learning] anytime”

Make it controllable:

  • Let users opt in/out of data collection
  • Let users delete their data
  • Show what data is being collected

Strategic Questions

  1. Do users know when to trust the AI? Can they tell high vs. low confidence?
  2. Is correction easy? One click to fix or five steps?
  3. Have you tested with real users? Do they react as expected?
  4. What’s your error handling story? What happens when AI fails?
  5. Does your explanation actually help? Or is it confusing jargon?

Key Takeaway: Design AI UX around transparency, trust, and simplicity. Show confidence levels. Make correction easy. Explain when it helps. Design for graceful failure. Test with real users to understand when and how they trust AI.

Discussion Prompt

For your AI feature: How will users know when it’s working vs. when it’s making mistakes? What happens when it gets it wrong? Would you trust it?