Collaborative Coding with AI
Traditional pair programming means two developers working together at one keyboard. AI pair programming means you and an AI assistant collaborating to build software. But unlike a human pair, the AI has different strengths, different weaknesses, and different decision-making processes.
The key to effective AI pair programming isn’t treating it like human pairing. It’s understanding how to lead while letting the AI contribute its strengths.
The AI Pair Programming Model
In human pair programming, you have a “driver” (coding) and a “navigator” (thinking strategically). With AI, the roles are more fluid:
You are always the driver — You control direction, make final decisions, and ensure quality.
The AI is a capable navigator — It suggests implementations, catches errors, and offers alternatives, but defers to your judgment.
This asymmetry is essential. The AI shouldn’t drive because:
- It can’t understand your team’s context and goals
- It can’t make judgment calls about tradeoffs
- It can’t know your organization’s standards
- It has no accountability for the code
But it can contribute significantly when given good direction.
When to Lead vs. When to Follow
When You Lead (Design Time)
You should drive when:
Making architectural decisions
You: "Should we use SQL or NoSQL for this feature?"
AI: "Here are the tradeoffs of each..."
You: [Decides based on team capabilities and requirements]
The AI can explain options, but you know your constraints.
Setting standards and policies
You: "We're enforcing strict type safety in this project"
AI: "Understood. All suggestions will include full TypeScript types"
The AI follows your standards once set.
Handling ambiguous requirements
You: "The requirements say 'fast' but don't specify latency targets.
Given our traffic, I'm targeting 200ms response time."
AI: "Got it. I'll optimize for that constraint"
Only you understand business context.
Deciding scope and priorities
You: "Focus on getting this feature working. Optimization comes later."
AI: [Generates working code, not optimized code]
You set priorities based on deadlines, business value, and risk.
When to Follow (Implementation Time)
Let the AI lead when:
It knows the best pattern
You: "I need a function to fetch paginated results"
AI: "Here's the standard pagination pattern for your framework..."
You: [Trusts the AI's expertise on frameworks]
Don’t overthink patterns the AI knows well.
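For instance, a framework-agnostic offset-based pagination helper might be sketched like this (the function name, parameters, and return shape are illustrative, not from any specific framework):

```python
def paginate(items, page=1, per_page=50):
    """Return one page of results plus pagination metadata (offset-based)."""
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be positive")
    start = (page - 1) * per_page
    return {
        "items": items[start:start + per_page],
        "page": page,
        "per_page": per_page,
        "total": len(items),
        "has_next": start + per_page < len(items),
    }
```

Real frameworks (Django REST, SQLAlchemy, etc.) ship their own pagination utilities; the point is that this is a well-trodden pattern the AI reproduces reliably.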
Error handling and edge cases
You: "Parse this JSON"
AI: "I'll include error handling for malformed JSON, empty input..."
You: [Lets the AI handle edge cases it thinks of]
The AI often catches things you’d miss.
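A sketch of what that defensive parse might look like (the helper name and its return-None-on-failure convention are assumptions for illustration):

```python
import json

def parse_json(raw):
    """Parse JSON, handling the edge cases an AI pair often flags:
    non-string input, empty strings, and malformed JSON."""
    if not isinstance(raw, str) or not raw.strip():
        return None  # empty or non-string input
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed JSON
```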
Testing strategies
You: "Here's the function"
AI: "I'll generate tests for normal case, edge cases, and error cases"
You: [Lets the AI be thorough with testing]
AI testing is often more comprehensive than what developers write.
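The pattern looks like this in miniature: for even a trivial function, an AI pair will typically propose a normal case, an edge case, and an error case (the function and tests below are illustrative):

```python
def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("b must be nonzero")
    return a / b

# AI-style coverage: normal case, edge case, error case
def test_normal():
    assert divide(10, 2) == 5

def test_edge_negative():
    assert divide(-9, 3) == -3

def test_error_zero_divisor():
    try:
        divide(1, 0)
        assert False, "expected ZeroDivisionError"
    except ZeroDivisionError:
        pass
```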
Performance optimizations you haven’t thought of
You: "This function works but seems slow"
AI: "I see the bottleneck. Here's an optimized version..."
You: [Evaluates the optimization]
Fresh perspectives catch performance issues.
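As a toy example of the kind of rewrite the AI might propose, here is a quadratic duplicate finder next to a linear, set-based version (both functions are illustrative, not from the text):

```python
def find_duplicates_slow(items):
    # O(n^2): re-scans the prefix for every element
    dups = []
    for i, a in enumerate(items):
        if a in items[:i] and a not in dups:
            dups.append(a)
    return sorted(dups)

def find_duplicates_fast(items):
    # O(n): a set remembers what we've already seen
    seen, dups = set(), set()
    for a in items:
        if a in seen:
            dups.add(a)
        seen.add(a)
    return sorted(dups)
```

Both return the same answer; only the second scales to large inputs.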
Maintaining Code Ownership
“Code ownership” means understanding the code you’re responsible for. When AI writes code, you must maintain ownership:
Code Review Step
Never accept AI-generated code without review:
# AI generated:
def process_users(users):
    return [u for u in users if u['age'] > 18 and u['verified']]

# You review: Is this right?
# - Does age live in u['age'] or u.age?
# - Is verified the right check, or should it be is_verified?
# - What about None values in age?
# - Should this modify the users list or return a new one?
Always ask:
- Does this match our code style?
- Are there edge cases I should handle?
- Is there existing code I should reuse?
- Would I explain this code the same way?
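After answering review questions like those, the snippet above might be hardened as follows (a sketch assuming dict-based user records, and assuming that missing or None ages should be excluded):

```python
def process_users(users):
    """Return a new list of verified adult users.

    Hardened after review: tolerates missing keys and None ages,
    and never mutates the input list.
    """
    result = []
    for u in users:
        age = u.get("age")
        if age is not None and age > 18 and u.get("verified", False):
            result.append(u)
    return result
```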
Understanding What You Accept
Before accepting code:
// AI generated this middleware
const authMiddleware = (req, res, next) => {
  const token = req.headers.authorization?.split(' ')[1];
  if (!token) return res.status(401).send('Unauthorized');
  try {
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    next();
  } catch (err) {
    res.status(403).send('Invalid token');
  }
};
// You should understand:
// - Why Bearer tokens use this format
// - What happens if JWT_SECRET is undefined
// - Why 401 vs 403 (401=no auth, 403=invalid auth)
// - When next() is called vs error returned
If you don’t understand the code, ask the AI to explain it. Code you don’t understand is technical debt.
Critically Reviewing AI Suggestions
The AI isn’t always right. Part of effective pairing is knowing when to accept vs. reject.
Red Flags in AI Code
Red flag 1: Unused imports
import requests
from datetime import datetime
from typing import List
def get_current_hour():
    return datetime.now().hour
The imports for requests and List aren’t used. Either remove them or ask why the AI included them.
Red flag 2: Over-engineering for the scope
# You asked for: "store user preference"
# AI generated: Full cache layer, distributed locking, TTL management
# This is overkill for a simple feature
Red flag 3: Not matching your patterns
# Your codebase uses:
def validate_email(email: str) -> bool:
    ...

# AI suggests:
def isValidEmail(email):  # Different naming style
    ...
Ask the AI to follow your patterns.
Red flag 4: Missing error handling you expect
# You use try/except everywhere, but AI generated:
data = json.loads(response.text) # No error handling
# This breaks your patterns
Red flag 5: Security concerns
# AI generated:
query = f"SELECT * FROM users WHERE id = {user_id}"
# This is SQL injection vulnerable!
# Should be:
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
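A runnable illustration of why the parameterized form is safe, using Python's built-in sqlite3 module with an in-memory database (the table and the hostile input are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")

# Hostile input: an f-string query would splice this into the SQL itself
user_id = "1 OR 1=1"
row = conn.execute(
    "SELECT name FROM users WHERE id = ?", (user_id,)
).fetchone()
# With a bound parameter, the input is treated as a value, not as SQL,
# so the injection attempt simply matches no row.
print(row)

# The same query with a legitimate id works as expected.
print(conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone())
```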
The Question to Ask
Before accepting AI code, ask: “Would I be okay explaining this code to a code reviewer?”
If the answer is “I’d need to explain why the AI did this”, then you don’t understand it yet. Ask for clarification.
Pair Programming Workflow
Here’s a practical workflow:
Phase 1: Design (You Lead)
You: "I need an endpoint that accepts a user ID and returns
their purchase history from the last year"
AI: "Here are the design questions:
- Should we paginate results?
- Should we filter by status?
- Do we cache this data?"
You: [Answer based on your knowledge]
"Paginate by 50 items, no status filter,
cache for 1 hour"
Phase 2: Implementation (AI Suggests)
You: "Generate the endpoint code"
AI: [Provides implementation]
You: [Reviews the code]
"This looks good, but I need to handle the case
where there are no purchases"
Phase 3: Iteration (Collaborative)
You: "Can you also track the user's last purchase date?"
AI: "I'll add that field and include it in the response"
You: [Reviews]
"Perfect"
Phase 4: Testing (AI Suggests, You Verify)
You: "Write tests for this"
AI: [Generates test cases]
You: [Reviews]
"Good coverage. Run these against the code to verify"
Handling Disagreements
Sometimes you’ll disagree with the AI’s suggestion.
Bad approach:
You: "Your suggestion is wrong"
AI: "Let me try again"
[Gets worse]
Good approach:
You: "I see what you're suggesting, but I'm concerned about
the performance impact on large datasets. Can you optimize
for O(n log n) instead of O(n²)?"
AI: [Provides optimized version]
You: "That's better, but I'm still not sure. Let me write this
part myself based on our existing patterns"
It’s okay to take over. The AI is there to help, not to be right.
Building Consistency with AI
One challenge: ensuring code written by you and the AI looks cohesive.
Solution 1: Style Guide
Create or enforce a style guide the AI follows:
eslint config → AI respects it
black/isort for Python → AI uses same formatting
TypeScript strict mode → AI types everything
Solution 2: Code Patterns Document
# Our Patterns
## Error Handling
We use a custom AppError class:
Solution 3: Project Rules File
Create .cursorrules or .clinerules:
- Use arrow functions, not function declarations
- Always include error handling
- Write tests first for critical paths
- Use repository pattern for data access
The AI will follow these guidelines once established.
Pair Programming Challenges and Solutions
| Challenge | Solution |
|---|---|
| AI generates code in your style but not your team’s | Add style guide to project rules |
| AI over-engineers simple solutions | Be more specific: “Simple solution using built-in methods” |
| AI suggests using libraries you don’t want | Specify: “Using only standard library” |
| AI misses your team’s context | Document assumptions in .cursorrules |
| Code ownership feels unclear | Always review and explain to yourself |
| Pair becomes one-way (you just accept AI) | Push back on suggestions, think critically |
When AI Pairing Adds Most Value
AI pairing shines for:
- Boilerplate — Setup code, config files, repetitive patterns
- Test generation — Comprehensive test coverage
- Refactoring — Breaking large functions into smaller ones
- Learning — Understanding unfamiliar libraries or patterns
- Documentation — Writing docstrings, comments, READMEs
- Exploration — “What if we used this library?” quick prototypes
It’s less valuable for:
- Architecture — Requires business knowledge you have, AI doesn’t
- Trade-off decisions — Needs to weigh factors (cost vs. performance, etc.)
- Security-critical code — High liability if wrong
- Novel problems — No patterns to learn from
Real-World Example: Building a Feature Together
Scenario: Add user deactivation to your API
You: "I need a feature where users can deactivate their accounts.
Requirements: soft delete (don't remove data), prevent login,
allow reactivation within 30 days"
AI: "I'll need to understand your current:
- User model/schema
- Login flow
- Any existing soft delete patterns"
You: [Shows relevant code]
AI: [Generates migration, model updates, endpoint]
You: [Reviews]
- Migration looks good, uses your column naming
- Endpoint logic is sound
- Missing: audit logging for deactivation
You: "Add audit logging"
AI: [Updates code]
You: "Perfect. Now generate tests"
AI: [Tests cover happy path, edge cases, permission checks]
You: [Reviews tests]
- Comprehensive
- Uses your test patterns
- Ready to merge
In this scenario, both you and the AI contributed. You provided direction and context. The AI provided implementation and breadth (covering edge cases).
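The heart of that feature could be sketched like this (the User shape, field names, and the 30-day constant are illustrative assumptions, not a prescribed implementation):

```python
from datetime import datetime, timedelta

REACTIVATION_WINDOW = timedelta(days=30)

class User:
    def __init__(self, email):
        self.email = email
        self.deactivated_at = None  # soft delete: data stays, login is blocked

    def deactivate(self, now=None):
        self.deactivated_at = now or datetime.utcnow()

    def can_login(self):
        return self.deactivated_at is None

    def reactivate(self, now=None):
        now = now or datetime.utcnow()
        if self.deactivated_at is None:
            return False  # nothing to reactivate
        if now - self.deactivated_at > REACTIVATION_WINDOW:
            return False  # reactivation window has expired
        self.deactivated_at = None
        return True
```

A real implementation would put `deactivated_at` in a migration, check it in the login flow, and write an audit log entry on each transition, as the dialogue above outlines.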
Exercises
- Decision Log: For one week, track every time you lead vs. let the AI lead. Note:
- What decision you made
- Whether you led or followed
- Why you chose that approach
- If the outcome was good
- Code Review Practice: Get AI suggestions for a feature you’re working on. Thoroughly review using the checklist:
- Does it match your code style?
- Does it handle edge cases?
- Is it secure?
- Would I be comfortable explaining it?
Document your findings.
- Disagreement Resolution: The next time you disagree with the AI’s suggestion:
- Express the concern clearly
- Ask for alternatives or reasoning
- Decide: accept, request modification, or do it yourself
- Note what worked
- Team Pairing: If you work in a team, pair with the AI on an actual feature:
- Start with design phase
- Let AI implement
- Review critically
- Iterate on feedback
Ask teammates: “Does this look like code our team would write?”