Performance Analysis and Optimization
Performance problems differ from bugs. Your code works correctly but runs too slowly. This requires a different debugging mindset: instead of “why doesn’t it work?”, you’re asking “why is it slow?”
AI assistants are strong at performance optimization: common bottlenecks and their standard fixes are well-documented patterns that they recognize quickly.
Identifying Performance Problems
Where Performance Matters
Not all performance problems matter:
Good to optimize:
- API endpoint users call 1000x/day (affects revenue)
- Database query on large tables (scales with data)
- Rendering in user-facing UI (affects UX)
Not worth optimizing:
- Background job running once per day
- Rarely-called admin function
- Operation taking 100ms when 10s is acceptable
Optimize the code that matters.
How to Detect Bottlenecks
Method 1: User complaints
"The dashboard loads slowly"
→ Profile dashboard load time
→ Identify slow query
→ Optimize
Method 2: Monitoring/observability
"P95 API response time is 2 seconds"
→ Profile slow requests
→ Identify bottleneck
→ Optimize
Method 3: Local profiling
# Time the function
import time

start = time.perf_counter()  # monotonic and high-resolution; better for intervals than time.time()
result = slow_function()
print(f"Took {time.perf_counter() - start:.3f}s")
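If you find yourself timing code in several places, a small context manager is less error-prone than scattering start/stop calls. This is a sketch; the `timed` name is illustrative, not a standard utility.

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # perf_counter is monotonic, so the interval is immune
    # to system clock adjustments
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{label} took {elapsed:.3f}s")

# Usage:
with timed("squares"):
    sum(i * i for i in range(100_000))
```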
Method 4: Code review
// Obvious performance issue:
for (let i = 0; i < list.length; i++) {
  for (let j = 0; j < list.length; j++) {
    // O(n²) algorithm!
  }
}
Working with AI on Performance
Step 1: Establish the Baseline
Before optimizing, measure:
You: "This function processes 10,000 records.
Current performance: 45 seconds
Acceptable performance: 5 seconds
Here's the code: [code]"
AI: [Analyzes for bottlenecks]
With a baseline, you can measure improvement.
Step 2: Identify the Bottleneck
AI is good at spotting obvious bottlenecks:
# O(n²) algorithm for an O(n) problem
def find_duplicates(items):
    result = []
    for i, item in enumerate(items):
        for j, other in enumerate(items):  # Nested loop = O(n²)
            if i != j and item == other and item not in result:
                result.append(item)
    return result
Ask:
"This function is slow for large lists. Here's the code:
[paste code]
What's the bottleneck?"
AI identifies:
- Nested loop making it O(n²)
- Set lookup would be O(n)
- Could use a hash set instead
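A sketch of the fix the AI is pointing at: one pass over the list with two sets runs in O(n), since set membership checks are O(1) on average.

```python
def find_duplicates(items):
    # 'seen' tracks items already visited;
    # 'duplicates' collects each repeated item exactly once
    seen = set()
    duplicates = set()
    for item in items:
        if item in seen:
            duplicates.add(item)
        else:
            seen.add(item)
    return list(duplicates)
```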
Step 3: Profile the Actual Code
For complex performance issues, profile:
Python profiling:
import cProfile
import pstats
cProfile.run('function_to_profile()', 'stats')
stats = pstats.Stats('stats')
stats.sort_stats('cumulative').print_stats(10)
Ask the AI:
"Here's my profiling output:
[paste profiling results]
What's using the most time?"
Node.js profiling:
console.time('operation');
// code to profile
console.timeEnd('operation');
Common Optimization Patterns
Pattern 1: Replace Nested Loops with Set/Dictionary
Slow:
def has_common_elements(list1, list2):
    for item1 in list1:
        for item2 in list2:
            if item1 == item2:
                return True
    return False
# O(n²) complexity
Fast:
def has_common_elements(list1, list2):
    set1 = set(list1)
    return any(item in set1 for item in list2)
# O(n) complexity
Ask AI:
"This nested loop is slow. Convert to use a set for O(n) performance.
Current code: [code]"
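You can check the claimed speedup empirically. This throwaway benchmark (the `slow_common`/`fast_common` names are just for the comparison) uses disjoint lists, which is the worst case for the nested loop because it never exits early.

```python
import time

def slow_common(list1, list2):
    # O(n²): compares every pair
    for a in list1:
        for b in list2:
            if a == b:
                return True
    return False

def fast_common(list1, list2):
    # O(n): one pass to build the set, one pass to probe it
    set1 = set(list1)
    return any(item in set1 for item in list2)

n = 3_000
list1, list2 = list(range(n)), list(range(n, 2 * n))  # no overlap: worst case

start = time.perf_counter()
slow_common(list1, list2)
slow_time = time.perf_counter() - start

start = time.perf_counter()
fast_common(list1, list2)
fast_time = time.perf_counter() - start

print(f"nested loop: {slow_time:.3f}s, set: {fast_time:.5f}s")
```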
Pattern 2: Add Database Indexes
Slow:
SELECT * FROM users WHERE email = 'test@example.com';
Without an index, the database scans every row.
Fast:
CREATE INDEX idx_users_email ON users(email);
SELECT * FROM users WHERE email = 'test@example.com';
Ask AI:
"This query is slow on large tables. What index would help?
Query: [SQL]
Table schema: [schema]"
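With SQLite you can verify that an index is actually used via `EXPLAIN QUERY PLAN`. This self-contained sketch mirrors the users/email example above; the table contents are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(1000)],
)

def query_plan(conn):
    # EXPLAIN QUERY PLAN describes how SQLite will execute the query
    rows = conn.execute(
        "EXPLAIN QUERY PLAN "
        "SELECT * FROM users WHERE email = 'user1@example.com'"
    ).fetchall()
    return " ".join(str(r) for r in rows)

before = query_plan(conn)  # full table scan
conn.execute("CREATE INDEX idx_users_email ON users(email)")
after = query_plan(conn)   # index search
print("before:", before)
print("after:", after)
```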
Pattern 3: Lazy Loading / Pagination
Slow:
def get_all_users():
    return User.query.all()  # Loads all 1,000,000 users
Fast:
def get_users(page=1, per_page=50):
    return User.query.paginate(page=page, per_page=per_page)
Ask AI:
"This endpoint returns all records. For scalability, implement
pagination. Here's my endpoint: [code]"
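Framework-agnostic version of the same idea: in raw SQL, pagination is `LIMIT`/`OFFSET`. This sketch uses an in-memory SQLite table with made-up contents.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1, 201)])

def get_users_page(conn, page=1, per_page=50):
    # OFFSET skips the earlier pages; LIMIT caps rows fetched per request
    offset = (page - 1) * per_page
    return conn.execute(
        "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (per_page, offset),
    ).fetchall()

page2 = get_users_page(conn, page=2)
print(len(page2), page2[0])  # 50 rows, starting at id 51
```

One caveat worth knowing: `OFFSET` still scans the skipped rows, so for very deep pages keyset pagination (`WHERE id > last_seen_id`) scales better.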
Pattern 4: Caching
Slow:
def get_user_profile(user_id):
    # Expensive database query every time
    return db.query_complex_user_profile(user_id)
Fast:
import functools

@functools.lru_cache(maxsize=1000)
def get_user_profile(user_id):
    return db.query_complex_user_profile(user_id)
Ask AI:
"This function is called frequently with the same arguments.
Add caching. Current code: [code]"
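To see the cache working, count the underlying calls. This sketch stands in for the expensive query with a counter; the names are illustrative.

```python
import functools

call_count = 0

@functools.lru_cache(maxsize=1000)
def get_user_profile(user_id):
    global call_count
    call_count += 1  # stands in for the expensive database query
    return {"id": user_id, "name": f"user{user_id}"}

get_user_profile(1)
get_user_profile(1)  # cache hit: no second "query"
get_user_profile(2)
print(call_count)    # 2
print(get_user_profile.cache_info())
```

Keep in mind that `lru_cache` is per-process and never invalidates on its own, so it can serve stale data; for shared or expiring caches you'd reach for something like Redis instead.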
Pattern 5: Batch Operations
Slow:
for user in users:
    db.save_user(user)  # Individual saves, n separate queries
Fast:
db.save_users_batch(users) # Single query
Ask AI:
"This loop makes one database query per user (1000 queries).
Convert to batch operation. Current code: [code]"
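With sqlite3, the batch version is `executemany` inside a single transaction. This sketch verifies correctness by row count rather than timing; the table and data are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
users = [(f"user{i}",) for i in range(1000)]

# Batch: one executemany call inside one transaction,
# instead of 1000 separate INSERT round-trips
with conn:
    conn.executemany("INSERT INTO users (name) VALUES (?)", users)

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 1000
```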
Performance Analysis Request Template
Function/Query:
[code to optimize]
Current Performance:
- Time: X seconds
- For: N items
- Acceptable: Y seconds
Constraints:
- Memory limit: Z MB
- Must maintain: [data consistency, order, etc.]
- Can't use: [specific libraries]
Example Usage:
[how this function is typically called]
Help:
What's the bottleneck? What are optimization options?
Big O Complexity Analysis
AI can help you understand algorithmic complexity:
You: "Is this algorithm efficient?
[paste algorithm]"
AI: "This is O(n²) complexity. For 10,000 items, that's
100,000,000 operations. That's why it's slow.
You could optimize to O(n log n) by: [suggestion]"
Common Complexities
| O Notation | Example | 1,000 items | 10,000 items |
|---|---|---|---|
| O(1) | Hash lookup | 1 op | 1 op |
| O(log n) | Binary search | 10 ops | 13 ops |
| O(n) | Linear search | 1,000 ops | 10,000 ops |
| O(n log n) | Merge sort | 10,000 ops | 130,000 ops |
| O(n²) | Nested loop | 1M ops | 100M ops |
| O(2ⁿ) | Brute force | Too slow | Too slow |
Ask the AI to identify complexity and suggest improvements.
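The growth in the table can be demonstrated by counting operations instead of timing, which is deterministic. This sketch instruments a linear search (worst case) and a quadratic all-pairs loop; the function names are illustrative.

```python
def linear_search(items, target):
    # O(n): one comparison per item, worst case
    ops = 0
    for item in items:
        ops += 1
        if item == target:
            break
    return ops

def all_pairs(items):
    # O(n²): every item compared against every item
    ops = 0
    for a in items:
        for b in items:
            ops += 1
    return ops

items = list(range(1_000))
print(linear_search(items, -1))  # 1000 ops (worst case: not found)
print(all_pairs(items))          # 1000000 ops
```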
Memory Optimization
Sometimes the problem isn’t speed, it’s memory:
You: "This function uses 500MB for 10,000 items.
That's 50KB per item!
Code: [code]"
AI: "You're building a list in memory with each iteration.
You could process items one at a time instead.
Or use a generator to avoid storing all at once.
Here's an optimized version: [code]"
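A concrete version of that suggestion: a generator yields one item at a time, so peak memory stays flat no matter how many items you process. The record shape here is made up for illustration.

```python
import sys

def load_all(n):
    # Builds the full list in memory: O(n) space
    return [{"id": i, "payload": "x" * 100} for i in range(n)]

def load_lazy(n):
    # Generator: yields one record at a time, O(1) space
    for i in range(n):
        yield {"id": i, "payload": "x" * 100}

full = load_all(10_000)
lazy = load_lazy(10_000)
print(sys.getsizeof(full))  # list object grows with n
print(sys.getsizeof(lazy))  # generator stays small and constant
total = sum(1 for _ in lazy)
print(total)  # 10000 records processed without storing them all
```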
Optimization Workflow
Step 1: Establish Metrics
Before optimization:
- Response time: 2000ms
- Memory: 200MB
- CPU usage: 85%
- Goal: <500ms, <50MB, <20% CPU
Step 2: Profile
Profile the code to identify where time is spent:
- 80% in database queries
- 15% in JSON serialization
- 5% in business logic
Step 3: Optimize Top Bottleneck
Focus on database queries (80% of time).
Ask AI: "How can I optimize these queries?
[paste slow queries]"
Step 4: Measure
After optimization:
- Response time: 1200ms (40% improvement)
- Still need more optimization
Step 5: Optimize Next Bottleneck
Now optimize JSON serialization (300ms, i.e., 25% of the remaining 1200ms).
Repeat until acceptable.
Step 6: Verify
Final metrics:
- Response time: 400ms ✓ (better than 500ms goal)
- Memory: 30MB ✓ (better than 50MB goal)
- CPU: 15% ✓ (better than 20% goal)
Real-World Example: Optimizing Slow Dashboard
Scenario: Dashboard takes 30 seconds to load
Step 1: Profile to find bottleneck
- SQL query: 25 seconds (83%)
- JSON serialization: 3 seconds (10%)
- Rendering: 2 seconds (7%)
Step 2: Ask AI about slow SQL
"This query is slow. Here's the query: [SQL]
Here's the schema: [schema]"
AI: "You're doing a full table scan.
Add index on created_date.
Use LIMIT to paginate results.
Join instead of N+1 queries"
Step 3: Implement suggestions
- Add index
- Paginate (load 100 items, not 100,000)
- Fix N+1 queries
Step 4: Measure again
- SQL query: 2 seconds (from 25 seconds!)
- Total: 7 seconds (from 30 seconds!)
- Much better, but still slow
Step 5: Optimize next bottleneck (now JSON serialization at 43% of time)
Ask: "How do I optimize serialization?
[code]"
Step 6: Measure final
- Total: 3 seconds (10x improvement!)
When to Stop Optimizing
Don’t optimize forever. Stop when:
- Performance is acceptable — User sees <2 second load
- Law of diminishing returns — Each optimization gives <10% improvement
- Cost exceeds benefit — Optimization takes longer than speedup saves
- Other issues are bigger — Fixing bugs > optimizing 20% slow feature
Exercises
1. Bottleneck Identification: Find a slow function in your code. Measure it. Ask AI to identify bottlenecks. What did it find?
2. Complexity Analysis: Pick a function and ask AI to analyze its Big O complexity. Is it efficient for typical use cases?
3. Optimization Before/After: Find a performance bottleneck. Ask AI for optimization suggestions. Implement the top suggestion. Measure improvement.
4. Profiling Analysis: Profile a slow function. Ask AI to interpret the profiling output. What's using the most resources?
5. Scalability Review: Pick a function that processes lists. Ask AI: "How would this perform with 1M items? Where would it break?" Plan optimizations.