ROI Models for AI Projects
Why Traditional ROI Doesn’t Work for AI
Traditional ROI = (Benefit - Cost) / Cost × 100%
This works for simple projects: you build a feature, it creates value, done. AI is messier. Benefits accrue over months or years. Costs change as systems scale. You don’t know if it will work until you try. You need ROI models that handle this uncertainty.
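The traditional formula translates directly into code. A trivial helper like this (hypothetical, just to keep units honest across the worked examples that follow) is a useful starting point:

```python
# Baseline ROI formula from above: (Benefit - Cost) / Cost.
# Benefit and cost must be in the same units (e.g., year-1 dollars).
def roi(benefit, cost):
    return (benefit - cost) / cost   # 1.0 == 100%

print(roi(150_000, 100_000))  # 0.5, i.e., 50% ROI
```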
Framework 1: Cost Avoidance (The Simplest Model)
What it is: AI replaces human effort, avoiding labor costs.
When to use: When you can measure time savings directly.
How to build it:
1. Identify the task and current cost
- Task: “Classify customer support emails”
- Current approach: 3 support staff, 40% of time = 1.2 FTE
- Cost: 1.2 FTE × $60K = $72K/year
2. Estimate AI efficiency gain
- AI classification works: saves 70% of manual classification time
- New need: 1.2 × 30% = 0.36 FTE (for QA, escalations, edge cases)
- Annual savings: $72K - ($60K × 0.36) = $50.4K
3. Subtract AI costs
- API costs: $2K/year
- Infrastructure: $5K/year
- Team (half-time engineer): $50K
- Total cost: $57K
- Year 1 net benefit: $50.4K - $57K = -$6.6K
- Year 2+ net benefit: $50.4K - $7K = $43.4K
4. Calculate payback
- Year 1: -$6.6K (building investment)
- Year 2: +$43.4K (profit)
- Payback: 1.15 years
- 3-year total: -$6.6K + $43.4K + $43.4K = $80.2K
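The four steps above can be sketched as a small function. All figures are the worked example's assumptions, not measured data:

```python
# Cost-avoidance model: gross labor savings minus AI build and run costs.
def cost_avoidance(fte_replaced, fte_cost, time_saved_pct,
                   build_cost, annual_run_cost):
    """Return (year-1 net, steady-state annual net) in dollars."""
    baseline = fte_replaced * fte_cost            # current labor cost
    remaining = baseline * (1 - time_saved_pct)   # QA, escalations, edge cases
    gross_savings = baseline - remaining
    year1_net = gross_savings - (build_cost + annual_run_cost)
    steady_net = gross_savings - annual_run_cost  # build cost is one-time
    return year1_net, steady_net

year1, steady = cost_avoidance(
    fte_replaced=1.2, fte_cost=60_000, time_saved_pct=0.70,
    build_cost=50_000, annual_run_cost=7_000)
print(round(year1), round(steady))  # -6600 43400
```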
Reality check: Be honest about whether people disappear or shift to other work. Usually they shift. So you’re not saving $72K; you’re saving the cost of not hiring an additional person next year.
Framework 2: Productivity Gain (When People Do More)
What it is: AI helps people accomplish more, increasing throughput or quality without more headcount.
When to use: When the constraint is people’s ability to do work, not the number of people.
Example: Sales Team Productivity
Current state:
- 20 sales reps, average contract value (ACV) of $400K per deal
- Each rep handles ~50 deals/year
- Sales cycle: 3 months
- Reps spend 30% of time on admin, research, data entry
- Opportunity: 20 reps × 30% × 2,000 hours/year = 12,000 hours that could shift from admin to selling
With AI:
- AI handles research and initial qualification
- Reps focus on selling
- Improved utilization: selling time rises from 70% to 80% of a 2,000-hour year = +200 selling hours per rep per year
- Additional closes: 200 hours ÷ 40 hours per deal = 5 additional deals per rep
- Additional revenue: 20 reps × 5 deals × $400K = $40M
But wait: is there enough market to absorb $40M more in pipeline? Usually not.
More realistic:
- Pipeline constraints mean only ~10% of the added capacity converts to closed deals
- Additional revenue: 20 reps × 5 deals × 10% × $400K = $4M; at 30% margin ≈ $1.2M in profit
- AI cost: $100K setup + $200K/year
Year 1 ROI: ($1.2M - $300K) / $300K = 3x
Key point: Frame this as incremental revenue, not total possible revenue.
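A sketch of the productivity-gain model, with the pipeline constraint made explicit (all parameters are the hypothetical figures from the example):

```python
# Productivity gain: extra capacity, discounted by how much of it the
# market actually absorbs ("realization"), converted to profit via margin.
def incremental_profit(reps, extra_hours, hours_per_deal, deal_value,
                       realization, margin):
    extra_deals = reps * (extra_hours / hours_per_deal)  # raw capacity gain
    closed = extra_deals * realization                   # deals that actually close
    return closed * deal_value * margin                  # profit, not revenue

profit = incremental_profit(reps=20, extra_hours=200, hours_per_deal=40,
                            deal_value=400_000, realization=0.10, margin=0.30)
print(round(profit))  # 1200000
```

Setting `realization=1.0` reproduces the naive $40M revenue figure, which is why modeling the constraint matters.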
Framework 3: Quality/Error Reduction (Preventing Bad Outcomes)
What it is: AI reduces errors, preventing the costs of those errors.
When to use: When error correction is expensive or when errors have cascade consequences.
Example: Content Moderation
Current state:
- 10,000 user posts/day
- Manual moderation: 5% of daily posts slip through as unflagged offensive content (false negatives)
- Cost: 500 offensive posts/day × $100 impact (user complaints, retention loss) = $50K/day = $18.25M/year
With AI:
- AI flags posts for human review
- Suppose the AI alone catches 85% of offensive content (15% false negative)
- Cost with AI alone: 1,500 × $100 = $150K/day = $54.75M/year
- Improvement: $50K - $150K = -$100K/day… the AI alone is worse than the humans it replaced!
With AI flagging plus human review, though:
- AI reduces false negatives from 5% to 2%
- Cost: 200 × $100 = $20K/day = $7.3M/year
- Savings: $50K - $20K = $30K/day = $10.95M/year
- AI cost: $100K setup + $500K/year
- Year 1 ROI: ($10.95M - $600K) / $600K ≈ 17x
But this assumes:
- Error costs are accurately estimated
- AI quality improvements actually reach a 98% catch rate
- No increase in false positives (AI flags good content as bad)
Reality check: These are often overestimated. Audit your actual error costs carefully.
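The error-reduction arithmetic, as a sketch using the example's assumed rates and per-incident cost:

```python
# Annual savings from reducing the daily false-negative rate.
# Rates are fractions of total daily volume; $100/incident is an assumption.
def annual_error_savings(posts_per_day, fn_before, fn_after, cost_per_error):
    daily_before = posts_per_day * fn_before * cost_per_error
    daily_after = posts_per_day * fn_after * cost_per_error
    return (daily_before - daily_after) * 365

savings = annual_error_savings(posts_per_day=10_000, fn_before=0.05,
                               fn_after=0.02, cost_per_error=100)
print(round(savings))  # 10950000
```

Note how sensitive this is to `cost_per_error`: if the honest per-incident cost is $30 rather than $100, the savings shrink to about $3.3M.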
Framework 4: Revenue Impact (The Hardest to Prove)
What it is: AI enables new capabilities that attract customers or increase customer value.
When to use: When AI creates genuinely new capabilities or experiences.
Caution: This is the most commonly oversold and hardest to attribute.
Example: Personalization
Claim: “Personalization increases conversion by 20%”
Building the model (watch your units: monthly vs. annual):
- Current: 100K users/month, 2% conversion = 2,000 customers/month = 24,000/year
- With personalization: +20% = 28,800 customers/year
- ARPU: $100
- Incremental revenue: 4,800 × $100 = $480K/year
- AI cost: $150K setup + $50K/year = $200K in year 1
- Year 1 ROI: ($480K - $200K) / $200K = 1.4x
But proving this is hard:
- Did conversion actually rise 20%, or only 5%?
- Is it a short-lived novelty effect?
- Would conversion have increased anyway (seasonality, marketing)?
How to prove it:
- A/B test: 50% of users get personalization, 50% don’t
- Measure over 4+ weeks (control for weekly seasonality)
- Segment analysis: Does effect work for all users or just some?
- Control for other changes: Did anything else change this period?
Without A/B testing, revenue claims are guesses.
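A minimal way to check whether an observed lift is real is a two-proportion z-test on the A/B groups. This is a sketch using only the standard library; the counts below are hypothetical, and a real analysis should also check statistical power and segment effects:

```python
# Two-proportion z-test: is the conversion difference between control (A)
# and personalization (B) larger than chance would explain?
from math import sqrt, erf

def conversion_lift_z(conv_a, n_a, conv_b, n_b):
    """Return (relative lift, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided tail probability.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b / p_a - 1, p_value

lift, p = conversion_lift_z(conv_a=1000, n_a=50_000, conv_b=1150, n_b=50_000)
print(round(lift, 3), p)  # ~0.15 lift, p well below 0.01
```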
Framework 5: Time-to-Market (Speed Value)
What it is: AI enables faster delivery, which has business value (first-mover advantage, faster iteration, faster response to market).
When to use: When speed itself creates competitive advantage or cost savings.
Example: Faster Drug Development
Current state:
- Drug discovery takes 10 years, costs $2.6B
- Each year of delay costs $260M in lost opportunity
- Timeline: 3 years discovery, 3 years preclinical, 4 years clinical trials
With AI:
- AI accelerates discovery phase: 3 years → 2 years
- Accelerates preclinical: 3 years → 2 years
- Total acceleration: 2 years to market
- Value: $260M/year × 2 years = $520M
- AI system cost: $50M development
ROI: ($520M - $50M) / $50M = 9.4x
But this is speculative:
- Does AI really save 2 years? Maybe only 1.
- Are there other bottlenecks (regulatory approval)?
- Will competitors also use AI (reducing first-mover advantage)?
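Because the speed estimate is the weakest link, it is worth running the ROI across a range of time saved. A small sensitivity sketch using the drug-development figures above (all assumptions from the example):

```python
# Speed-value ROI as a function of years actually saved.
def speed_roi(value_per_year, years_saved, ai_cost):
    benefit = value_per_year * years_saved
    return (benefit - ai_cost) / ai_cost

for years in (0.5, 1, 2):
    print(years, "years saved -> ROI", round(speed_roi(260e6, years, 50e6), 1))
```

At two years saved the ROI is 9.4x; at one year it drops to 4.2x, and at half a year to 1.6x, so the investment case survives even a pessimistic speed estimate here.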
Time-to-Value Framework
Rather than assuming immediate benefit, model when benefits actually appear:
| Timeline | Benefit | Cost | Cumulative |
|---|---|---|---|
| Months 0-2 | $0 | $30K (dev) | -$30K |
| Months 3-5 | $5K/month | $10K/month | -$45K |
| Months 6-8 | $10K/month | $10K/month | -$45K |
| Months 9-12 | $15K/month | $10K/month | -$25K |
Under these assumptions, cumulative cash flow turns positive around month 17.
More realistic model accounts for:
- Ramp period before value appears (months 0-2)
- Gradual improvement as system improves (months 3-9)
- Plateauing of benefits (month 12+)
- Sustained costs even as benefits plateau
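The cumulative column can be recomputed mechanically from the per-month figures; a sketch under the same ramp assumptions ($30K dev, $10K/month run cost, benefits stepping from $5K to $15K/month):

```python
# Time-to-value: cumulative cash flow month by month under a benefit ramp.
def cumulative_cash(months=12, dev_cost=30_000, run_cost=10_000):
    def benefit(m):                 # ramp: $0 -> $5K -> $10K -> $15K/month
        if m <= 2: return 0
        if m <= 5: return 5_000
        if m <= 8: return 10_000
        return 15_000
    total = -dev_cost               # months 0-2: development only
    out = []
    for m in range(3, months + 1):  # running costs start in month 3
        total += benefit(m) - run_cost
        out.append((m, total))
    return out

for month, cum in cumulative_cash():
    print(month, cum)
```

Extending `months` past 12 shows when the $5K/month steady-state surplus finally repays the early deficit.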
Scenario Planning for Uncertainty
AI projects have outcome uncertainty. Plan for multiple scenarios:
Optimistic Scenario
- “Everything works better than expected”
- AI achieves 90% accuracy (targeting 80%)
- Adoption is faster than expected
- Benefit: $100K/month, costs $15K/month
- Payback: 4 months
- Year 1 ROI: 340%
Expected Scenario
- “Works about as we planned”
- AI achieves 80% accuracy as targeted
- Adoption is gradual (3 months to full)
- Benefit: $50K/month (ramping), costs $15K/month
- Payback: 8 months
- Year 1 ROI: 150%
Pessimistic Scenario
- “Harder than expected”
- AI achieves 65% accuracy (not acceptable)
- Implementation delays
- Benefit: $20K/month, costs $15K/month
- Payback: 15+ months
- Year 1 ROI: 30%
Decision rule: If even the pessimistic scenario justifies investment, proceed. If only the optimistic scenario works, be cautious.
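The decision rule can be sketched directly. The monthly figures come from the scenarios above; the $100K setup cost is a hypothetical addition, and note that folding it in pushes the pessimistic case negative:

```python
# Scenario-based decision rule: proceed only if the pessimistic scenario
# still clears the ROI hurdle. Setup cost of $100K is an assumption.
def year1_roi(monthly_benefit, monthly_cost, setup_cost=100_000):
    cost = monthly_cost * 12 + setup_cost
    return (monthly_benefit * 12 - cost) / cost

def proceed(scenarios, hurdle=0.0):
    return year1_roi(*scenarios["pessimistic"]) > hurdle

scenarios = {"optimistic":  (100_000, 15_000),
             "expected":    (50_000, 15_000),
             "pessimistic": (20_000, 15_000)}
print(proceed(scenarios))  # False: pessimistic case is cash-negative in year 1
```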
Measuring AI Success: The Metrics That Matter
Not all metrics are created equal.
Business Metrics (What Leaders Care About)
- Time to breakeven: How long until the project becomes cash-positive?
- Total 3-year benefit: Sum of all benefits over 3 years minus all costs
- ROI: (Benefit - Cost) / Cost, at year 1 and year 3
- Payback period: How many months until cumulative benefit exceeds cost?
Operational Metrics (What Teams Care About)
- Cost per unit value: How much does it cost to create $1 of value?
- Cost per transaction: How much does it cost to process one unit?
- Processing efficiency: How much better/faster is the process?
- Team utilization: Is the team actually using this?
Product Metrics (What Users Care About)
- Accuracy/quality: Does the AI output actually work?
- User satisfaction: Do users like the feature?
- Adoption rate: What percentage of users try/use it?
- Retention: Do users keep using it or abandon it?
Common ROI Mistakes to Avoid
Mistake 1: Attributing General Improvement to Your AI
What happens: Conversion goes up 5% after you launch AI personalization, and you claim the AI caused it. But overall market conditions also improved, competitors stumbled, or seasonality favored you.
How to avoid: Run A/B tests. Control for seasonal and external factors. Be conservative about attribution.
Mistake 2: Counting Savings That Never Materialize
What happens: You save 1 FTE worth of work, but don’t actually reduce headcount (person shifts to other work). You count the savings anyway.
How to avoid: Only count savings that actually reduce cost or create new revenue. Shifting work has value, but it’s not the same as headcount reduction.
Mistake 3: Using Inflated Baseline Costs
What happens: Current error handling costs $100K/year. You estimate this high to inflate AI savings. But honest measurement says it’s $30K/year.
How to avoid: Audit actual costs. Use data, not estimates. Be conservative.
Mistake 4: Ignoring Ongoing Costs
What happens: You calculate year 1 ROI = 5x. But year 2 costs double because you need more infrastructure. Now ROI = 2.5x.
How to avoid: Project 3-year economics, not just year 1. Account for scaling costs.
ROI Frameworks Summary
| Framework | Best For | Key Metric | Complexity |
|---|---|---|---|
| Cost Avoidance | Automation | $ saved | Low |
| Productivity Gain | Efficiency | $ additional revenue/capacity | Medium |
| Quality Improvement | Error reduction | $ prevented losses | Medium |
| Revenue Impact | New capability | $ incremental revenue | High |
| Speed Value | Market advantage | $ from faster time | High |
Recommendation: Use cost avoidance and productivity gain for quick decisions. Use quality improvement when you have error data. Be cautious with revenue impact and speed value unless you can A/B test.
Key Takeaway: AI ROI models range from straightforward (cost avoidance) to speculative (revenue impact). Build models based on measurable, conservative assumptions. Plan for uncertainty with optimistic/expected/pessimistic scenarios. Focus on metrics you can actually measure. The best ROI model is one where you can quantify both cost and benefit after launch.
Discussion Prompt
For your priority AI initiative: Which ROI framework applies? What are your honest conservative assumptions about benefit, timeline, and cost? When would you actually break even?