Foundations

Communicating AI to Stakeholders

Lesson 4 of 4 · Estimated Time: 40 min

The Communication Challenge

There’s a language gap between AI capability and business reality. Your technical team might talk about “transformer architectures” and “hallucination mitigation.” Meanwhile, your CEO wonders if AI will give you competitive advantage, and your customer-facing teams worry about AI replacing their jobs.

Your job as a leader is translating between these worlds—explaining what AI actually can and can’t do in terms that matter to different audiences, managing expectations realistically, and building genuine buy-in rather than AI hype that leads to disappointment.

Understanding Your Audience

Different stakeholders care about different things. Your communication strategy must address their specific concerns and incentives.

The Executive Team

What they care about: Strategic advantage, ROI, competitive risk, and time to impact.

What confuses them: Technical details, overoptimistic timelines, and unclear business value.

How to communicate:

  • Lead with business outcomes: “This could reduce customer support costs by 30% and improve response time by 50%”
  • Acknowledge the timeline: “We’re looking at a 12-week pilot to assess feasibility”
  • Frame competitive risk: “Our largest competitor launched an AI assistant last month. Here’s how we’d respond.”
  • Be honest about what’s unclear: “We believe this is possible but have genuine uncertainty about implementation cost”
  • Connect to existing strategy: “This supports our goal of improving customer experience at scale”

Avoid:

  • Overpromising timelines (“We’ll have this live in 4 weeks” when you mean 16)
  • Technical jargon (unless they specifically want it)
  • Comparing to sci-fi AI (“This isn’t Skynet, but here’s what it actually does”)
  • Framing as “nice to have” when it’s strategic

Your Engineering Team

What they care about: Feasibility, technical challenges, impact on their day-to-day, and learning opportunities.

What confuses them: Vague requirements, unclear ROI, and unstated constraints (budget, timeline, data access).

How to communicate:

  • Be specific about problems: “We need to reduce customer support response time from 24 hours to 2 hours for common questions”
  • Explain constraints clearly: “We have 8 weeks, a budget of $X, and access to Y historical support tickets”
  • Acknowledge technical challenges directly: “We know hallucination is a risk for product descriptions; here’s how we’ll address that”
  • Include them in solution design: “What’s your gut on whether this is solvable with existing tools?”
  • Frame learning: “This pilot will help us understand how to build AI into our platform long-term”

Avoid:

  • Oversimplification (“Just use ChatGPT”)
  • Unrealistic constraints (an aggressive timeline that doesn’t account for integration work)
  • Treating them as order-takers rather than partners
  • Expecting them to solve problems they didn’t create

Product and Customer-Facing Teams

What they care about: User experience, customer impact, competitive positioning, and integration with existing workflows.

What confuses them: Technical limitations presented as features, hallucinations, and unclear user benefits.

How to communicate:

  • Lead with user benefit: “This could answer customer questions 24/7 instead of during business hours”
  • Be honest about limitations: “It’s great at answering FAQ questions but might struggle with edge cases”
  • Anticipate how users will react: “Some customers will love instant answers; others might want human verification”
  • Include them in design: “What do you think would make this actually useful for our customers?”
  • Focus on integration: “How does this fit into the current support workflow?”

Avoid:

  • Treating AI features as “magic” that requires no design thought
  • Overselling capability without mentioning limitations
  • Ignoring user experience challenges
  • Surprising them with AI features they haven’t designed

Your Broader Organization

What they care about: Job security, what AI means for them, and whether to embrace or resist it.

What confuses them: Conflicting messages, technical jargon, and uncertainty about personal impact.

How to communicate:

  • Be transparent: “We’re exploring AI for specific tasks. Here’s what that means for each team”
  • Be honest about changes: “This AI tool might change how we do X, and we’re thinking about what that means for everyone”
  • Lead with augmentation: “This won’t replace customer service reps; it will help them respond faster”
  • Acknowledge concerns directly: “It’s natural to worry about job changes. Let’s talk about what we’re actually seeing”
  • Provide clarity on what’s changing: “Here’s what your job might look like with this tool”

Avoid:

  • False reassurance: “Don’t worry, AI won’t change anything” (when it will)
  • Avoiding the conversation: “This is just for the tech team” (when it affects everyone)
  • Surprise implementations: “We’ve launched AI to replace part of your job” (without discussion)
  • Dismissing concerns: “You’re just being resistant to change”

Managing the Hype-Reality Gap

The AI hype cycle is real, and it creates genuine communication challenges.

Where the Hype Comes From

Media coverage emphasizes breakthrough moments. “AI learns to do X!” sells papers. “AI does X 10% better than previous approach, with known tradeoffs” doesn’t. This creates inflated expectations.

Your job isn’t to eliminate hype—it’s to channel it toward realistic action.

Managing Up: Realistic Executive Communication

The challenge: Your CEO watched a demo of GPT-4 and now thinks you should launch an AI feature in 4 weeks.

The approach:

  • Acknowledge the excitement: “Yes, the capability is impressive. Here’s what it actually looks like in our context.”
  • Provide concrete timeline and cost: “A proper pilot takes 8 weeks and $150K. Here’s what we’ll learn.”
  • Show the risk: “The main risk is hallucination in product descriptions. Here’s our mitigation plan.”
  • Build in learning: “The pilot isn’t just about launching a feature—it’s about understanding what’s possible for us long-term.”
  • Plan for iteration: “We’ll launch v1 with significant human oversight, then expand as we learn.”

Sample message to executives:

“We’re excited about the same AI capabilities you’ve seen. Here’s how we’d responsibly explore this: an 8-week pilot focused on customer support automation. We’re estimating it could reduce support costs by 25-30% if it works, but there’s real uncertainty—that’s why we’re piloting. At week 4, we’ll make a go/no-go decision. The goal isn’t rushing to launch; it’s understanding what’s actually feasible and valuable for us.”

Managing Across: Technical Team Realism

The challenge: Your engineers are excited and want to build the perfect AI solution, which takes 6 months.

The approach:

  • Start small: “Let’s pilot with the simplest version first—v1 doesn’t need to be perfect.”
  • Set real constraints: “We have 8 weeks. What’s the minimal viable pilot?”
  • Focus on learning: “We’re not trying to build the final system. We’re trying to learn if this approach works for us.”
  • Plan for iteration: “v1 is 60% of the capability with 80% of the value. We’ll improve based on feedback.”

Sample message to technical teams:

“I know you see all the possibilities here. Let’s start with the simplest version: automated responses to 5 common support questions. Yes, we could add 20 more features, but that’s 6 months of work. Instead, let’s ship a basic version in 6 weeks, see what actually happens with users, and build from there. We’ll learn more from real usage than from six months of development.”

Managing Down: Clarity for Broader Organization

The challenge: People are hearing “AI” everywhere and worried about their jobs.

The approach:

  • Be specific about what’s happening: “We’re exploring AI for handling customer support tickets. Here’s specifically what that means.”
  • Be honest about change: “Yes, this will change how support tickets are handled. Here’s how we’re thinking about that for your team.”
  • Engage people in the solution: “Help us design how this actually works. What matters most to you?”

Sample message to broader org:

“You’ve probably heard we’re exploring AI. Here’s the specific plan: We’re testing an AI tool to help draft responses to customer support emails. This isn’t about eliminating support roles; it’s about freeing the support team to focus on complex issues while handling more tickets. We’ll pilot this with a small team first. Your feedback will be critical.”

Building Buy-In Without Overselling

Real buy-in comes from demonstrating value, not from hype.

Start with Credibility Through Pilots

One successful pilot is worth 100 PowerPoint presentations about AI potential. Pick a problem where:

  • You have relevant data
  • The solution approach is clear (the pilot tests execution, not open-ended discovery)
  • You can measure success quantitatively
  • The pilot takes 8-12 weeks, not 6 months
  • You can launch with real usage, not just test scenarios

Show Honest Results

When the pilot concludes, share unfiltered results:

  • What worked (and exceeded expectations)
  • What underperformed (and why)
  • What we learned
  • What we’re building on top
  • What we’re not doing (and why)

Honesty builds credibility far more than overselling results. When leaders are truthful about results, people trust the next recommendation.

Celebrate Incremental Progress

The fastest way to kill organizational AI momentum is setting expectations for transformational change, then delivering incremental improvement.

Reframe what incremental looks like:

  • “This reduced response time from 8 hours to 4 hours.” That’s valuable.
  • “This lets the support team handle 20% more tickets.” That’s valuable.
  • “This prevents 30% of escalations.” That’s valuable.

These aren’t sci-fi breakthroughs, but they’re real business value that compounds.

Setting Appropriate Expectations

The Pilot Phase

“We’re running an 8-week pilot to understand if AI can help with problem X. We’re targeting a 30-50% improvement, with the primary goal being learning whether this is feasible and valuable. We’re expecting this to work better than our current approach but worse than humans at the most complex cases. We’ll know by week 8 whether to invest further.”

The v1 Launch

“We’re launching with high human involvement—AI handles the straightforward parts, humans handle everything else. We’ll monitor closely and gradually shift more to AI as we’re confident it’s working. We’re expecting 20-30% efficiency improvement in phase 1. Some users will love it; some will find it annoying. We’ll iterate based on feedback.”

The Mature Program

“We’ve been running this for 6 months. Here’s what’s working better than expected and what’s harder than we thought. We’re focusing our next phase on [specific improvement]. We’re also planning how to apply what we’ve learned to [adjacent problem].”

Handling Pushback and Skepticism

Not everyone will be enthusiastic about AI. Some skepticism is healthy.

The Job Elimination Concern

They say: “Isn’t this going to eliminate jobs?”

You say: “That’s a fair question. Here’s what we actually see: AI tools change jobs rather than eliminating them immediately. Some tasks go away; new ones appear. Our commitment is to help people transition to higher-value work, not to eliminate livelihoods without notice. As roles change, we’ll invest in training people for what’s next.”

Follow up with action: If AI will change roles, have that conversation early and authentically. Decide what the organization’s actual commitment is—and keep it.

The “This is Just Hype” Skepticism

They say: “AI has been promised for years. This will disappoint us like everything else.”

You say: “That’s fair—we’ve been disappointed before. The difference this time is that these tools actually work for specific, well-defined problems. We’re not betting the company. We’re running a small pilot on a specific problem to see if it creates value. Let’s agree to judge by results, not hype.”

The “We Can’t Trust AI” Concern

They say: “AI makes mistakes and we can’t trust it.”

You say: “You’re right—we can’t trust AI blindly. That’s why we’re not asking it to make critical decisions alone. We’re using it to help humans work faster and smarter. Humans are still the decision-makers. AI is a tool.”

The Long-Term Communication Narrative

As AI becomes more embedded in your organization, develop a consistent narrative:

The setup: “AI is changing how work gets done. We want to lead thoughtfully in using these tools while addressing real concerns about change, fairness, and impact.”

The immediate strategy: “We’re exploring AI in specific areas where it creates clear value—starting with [concrete examples]. We’re committed to transparency, to involving people in design, and to making sure the benefits are shared.”

The change management: “As roles change, we’re committing to help people learn new skills. This isn’t about doing more with fewer people; it’s about doing different work.”

The long-term vision: “Five years from now, [your domain] will look different because of AI. Our goal is making sure we’re leading that change in ways that work for our business and our people.”

Key Takeaway: Successful AI communication requires understanding your audience and addressing their specific concerns. Manage expectations ruthlessly—overselling is the fastest way to lose credibility. Use pilots to demonstrate value, celebrate incremental progress, and be honest about what didn’t work. Build buy-in through credibility, not hype.

Discussion Prompt

Who’s your hardest audience to communicate with about AI, and what specific concerns do they have? What would actually move them from skeptical to supportive?