Foundations

Context Management for Better Results

Lesson 3 of 4 · Estimated time: 45 minutes

The quality of AI assistance depends entirely on the context it has access to. An AI assistant with full knowledge of your project, conventions, and requirements will generate far better code than one working from minimal information.

Context management is about deliberately feeding your AI assistant the right information at the right time.

Understanding Context Windows

Every AI model has a “context window” — the maximum amount of text it can consider when generating a response.

Context Window Sizes

Model            Context Window           In Plain English
GPT-4            8,000 - 128,000 tokens   Can see your entire project
Claude 3 Opus    200,000 tokens           Can see massive codebases
GitHub Copilot   ~2,000 tokens            Sees local file + recent context
Claude Code      200,000 tokens           Can see entire project structure

One token ≈ 4 characters (very rough), so a 200,000 token window can hold roughly 800,000 characters — multiple large files.
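That arithmetic is easy to sanity-check in code. Here is a rough sketch using the same 4-characters-per-token heuristic (real tokenizers are model-specific BPE schemes, so treat this only as a ballpark check before pasting a large file):

```typescript
// Rough token estimator based on the ~4 characters/token heuristic.
// Real tokenizers vary by model; this is only a ballpark check before
// pasting a large file into a limited context window.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitsInWindow(text: string, windowTokens: number): boolean {
  return estimateTokens(text) <= windowTokens;
}

// A ~2,000-token window (Copilot-sized) holds roughly 8,000 characters:
const bigFile = "x".repeat(10_000);
console.log(fitsInWindow(bigFile, 2_000)); // false — would be truncated
```

A quick check like this tells you whether to paste a whole file or just the relevant excerpt.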

Strategic Context Usage

With limited context, you must be strategic about what you show:

Limited context (Copilot):
- The file you're currently working in
- Recent imports
- Variable names nearby

Full context (Claude Code):
- Your entire project
- All imports and dependencies
- Complete file structure
- Related functions

Knowing your tool’s context limits helps you work within them.

Project Structure as Context

How you organize your project matters. AI assistants use file structure to understand your architecture.

Good Structure (AI Understands)

src/
  api/
    routes/
      users.ts
      products.ts
    middleware/
      auth.ts
    controllers/
      userController.ts
  database/
    models/
      User.ts
      Product.ts
    migrations/
      001_create_users.ts
  services/
    userService.ts
  utils/
    validation.ts
tests/
  api/
    users.test.ts

From this structure, the AI can infer:

  • Separation of concerns (routes, controllers, services)
  • Architectural patterns (MVC)
  • Testing strategy
  • Directory purposes

Poor Structure (AI Guesses)

src/
  index.ts
  utils.ts
  database.ts
  api.ts
  models.ts
  tests.ts

Without clear organization, the AI can’t infer patterns.

Using .cursorrules and .clinerules

These files tell AI assistants about your project’s standards and patterns.

.cursorrules Example

Create at your project root:

# Cursor Rules for MyApp

## Technology Stack
- TypeScript (strict mode)
- React 18 with hooks
- Node.js/Express backend
- PostgreSQL database
- Jest for testing

## Code Style
- Use arrow functions
- Maximum 80 character line length
- Use descriptive variable names
- Always include error handling

## Architecture Patterns
- Controllers handle HTTP layer
- Services contain business logic
- Models define database structure
- Utils for reusable functions

## Naming Conventions
- Files: camelCase.ts for modules, PascalCase.tsx for React components
- Functions: camelCase
- Classes: PascalCase
- Constants: UPPER_SNAKE_CASE
- Private properties: _leadingUnderscore

## Error Handling
- Use custom AppError class
- Always include status codes
- Log errors with context
- Return user-friendly messages

## Database
- Use Sequelize ORM
- Use migrations for schema changes
- Foreign key naming: tableName_id (user_id, not userId)

## Testing
- Jest for unit tests
- Supertest for API tests
- Minimum 80% code coverage
- Test files next to source files (.test.ts)

## Git Workflow
- Branch naming: feature/*, bugfix/*, docs/*
- Commit messages: conventional commits (feat:, fix:, docs:)
- Never commit secrets (.env in .gitignore)

## Files to Ignore
- node_modules/
- dist/
- .env
- .env.local

When this file exists, the AI tailors suggestions to match your rules.
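For instance, the rules above reference a custom AppError class without defining it. A hypothetical sketch of what such a class might look like, following the error-handling rules (the exact fields are an assumption, not part of the lesson):

```typescript
// Hypothetical AppError matching the rules file above: a status code,
// a user-friendly message, and optional context for logging.
class AppError extends Error {
  constructor(
    message: string,                           // user-friendly message
    public statusCode: number,                 // HTTP status code
    public context?: Record<string, unknown>,  // extra detail for logs
  ) {
    super(message);
    this.name = "AppError";
  }
}

// Usage:
const err = new AppError("User not found", 404, { userId: "42" });
console.log(err.statusCode); // 404
```

With a definition like this in the rules' orbit, the AI can generate error handling that consistently carries status codes and log context.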

Where to Put Project Rules

Tool               File Name                   Location
Cursor             .cursorrules                Project root
Claude Code        CLAUDE.md                   Project root
Cline              .clinerules                 Project root
Copilot            Comments in code            At top of files
VS Code + any AI   Comments in .editorconfig   Project root

What Goes in Rules Files

Do include:

  • Technology stack
  • Code style preferences
  • Architectural patterns
  • Naming conventions
  • Error handling patterns
  • Git workflow
  • Files to ignore

Don’t include:

  • Specific business logic (that’s for chat)
  • Frequently changing requirements
  • Private information

Types of Context to Provide

Context Type 1: Codebase Examples

When asking the AI to follow your patterns:

"Here's how we handle database queries in this project:
[paste userService.ts example]

Here's how we validate input:
[paste validation.ts example]

Generate a new service following these patterns"

Examples teach faster than rules.

Context Type 2: File References

Different tools reference files differently:

Cursor: Use @ symbol

@src/database/models/User.ts describes the user structure.
How would I add a role field?

Claude Code: Drag and drop or upload files

[Upload entire src/models directory]

"How does the User model relate to Product?"

Copilot: Copy and paste

"Here's my middleware:
[paste code]

How would I add logging?"

Context Type 3: API/Data Contracts

When generating code that integrates with other parts:

// Show the interface the code must satisfy
interface UserRepository {
  findById(id: string): Promise<User>;
  findByEmail(email: string): Promise<User | null>;
  create(data: CreateUserInput): Promise<User>;
  update(id: string, data: UpdateUserInput): Promise<User>;
}

// Ask the AI to generate this
"Generate a PostgreSQL-based implementation of UserRepository"
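Given that contract, the AI's answer has a precise target. The following in-memory sketch shows the kind of shape a valid answer must take (the User and input types are assumptions, since the prompt elides them; a real answer would issue SQL through a Postgres client):

```typescript
interface User { id: string; email: string; name: string; }
type CreateUserInput = Omit<User, "id">;
type UpdateUserInput = Partial<CreateUserInput>;

interface UserRepository {
  findById(id: string): Promise<User>;
  findByEmail(email: string): Promise<User | null>;
  create(data: CreateUserInput): Promise<User>;
  update(id: string, data: UpdateUserInput): Promise<User>;
}

// In-memory stand-in that makes the contract's behavior concrete;
// a PostgreSQL implementation would swap the Map for SQL queries.
class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, User>();
  private nextId = 1;

  async findById(id: string): Promise<User> {
    const user = this.users.get(id);
    if (!user) throw new Error(`User ${id} not found`);
    return user;
  }

  async findByEmail(email: string): Promise<User | null> {
    for (const user of this.users.values()) {
      if (user.email === email) return user;
    }
    return null;
  }

  async create(data: CreateUserInput): Promise<User> {
    const user: User = { id: String(this.nextId++), ...data };
    this.users.set(user.id, user);
    return user;
  }

  async update(id: string, data: UpdateUserInput): Promise<User> {
    const user = await this.findById(id);
    const updated = { ...user, ...data };
    this.users.set(id, updated);
    return updated;
  }
}
```

Because the interface pins down every signature, you can verify the generated code by type-checking alone.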

Context Type 4: Business Context

Sometimes the AI needs to understand the “why” behind code:

"We're building a multi-tenant SaaS product. Each user belongs
to one organization. Data must be isolated per organization.

Given this User model: [model code]
How should I modify queries to ensure isolation?"
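One common answer is to route every query filter through a helper that injects the organization id, so isolation can't be accidentally omitted. A sketch, assuming an organizationId column (the names and query shape here are illustrative, not from the lesson):

```typescript
interface TenantScope { orgId: string; }

// Forces every query filter to carry the organization id, so
// tenant isolation cannot be forgotten at individual call sites.
function scopedWhere<T extends object>(
  scope: TenantScope,
  where: T,
): T & { organizationId: string } {
  return { ...where, organizationId: scope.orgId };
}

// e.g. with Sequelize-style where options (illustrative):
const filter = scopedWhere({ orgId: "org_1" }, { email: "a@b.co" });
console.log(filter); // { email: 'a@b.co', organizationId: 'org_1' }
```

The business context ("data must be isolated per organization") is what lets the AI propose a structural safeguard like this rather than ad-hoc filters.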

When Context Works Against You

Too much context can confuse the AI.

Over-Context Problem 1: Inconsistent Patterns

"Here's code from 3 years ago (bad patterns): [old code]
Here's new code (good patterns): [new code]
Generate code for new feature"

The AI might mix old and new patterns. Be explicit: “Use the patterns from the new code file only.”

Over-Context Problem 2: Legacy Code

"Here's our entire codebase (includes legacy, refactored, and new parts)
Generate code for feature X"

The AI sees conflicting patterns. Better: “Ignore the legacy/ directory, use only src/ as reference.”

Over-Context Problem 3: Unrelated Code

"Here's our codebase. Now implement blockchain payment processing"

The AI gets confused by hundreds of lines of unrelated code.

Better: “Here’s the payment processing interface you must implement: [code only]”

Solution: Selective Context

Only provide context that’s relevant:

Good: "Here are two similar features. Use their pattern."
Bad: "Here's our entire 50,000-line codebase."

Good: "Here's the database schema for this feature"
Bad: "Here's all our database schemas"

Good: "Here's one TypeScript interface to implement"
Bad: "Here's all our TypeScript interfaces"

Using Git History as Context

Your git history is excellent context about patterns:

# Show the AI recent related changes
git log --oneline -10 src/api/

# Look at how a similar feature was implemented
git show feature/auth:src/api/routes/auth.ts

# See what changed when adding a feature
git diff v1.0 v1.1 src/api/

You can copy this context into chat:

"Here's how we implemented authentication (from git history):
[git show output]

Use the same pattern for authorization"

Context Accumulation in Conversations

In chat interfaces, context accumulates. Use this to your advantage:

Conversation Structure

Message 1: "Here's our project structure and tech stack"
Message 2: "Here's the User model and UserService"
Message 3: "Generate a new endpoint"

By message 3, the AI has all context from messages 1-2.

Referencing Previous Context

You: "Can you review this code?"
[paste code]

AI: [reviews it]

You: "How does it compare to the pattern we discussed earlier?"
AI: [answers using the pattern from earlier in the conversation]

The AI builds understanding as you chat.

When Context Drifts

After 20-30 messages, you might want to reset:

"Let's summarize where we are:
- We're building an auth system
- Using JWT tokens
- Storing in Redis
- Here's the current state of the code: [code]

Next, I need to add..."

Summarizing keeps context fresh and prevents drift.

Optimal Context for Different Tasks

Task: Code Completion

Optimal context:

  • Current file
  • Imports from your project
  • Variable definitions nearby
  • Types/interfaces

Provide: None needed — completion uses visible context automatically

Task: Function Implementation

Optimal context:

  • Function signature
  • Related functions in the file
  • Type definitions
  • Your coding style example

Provide:

"Implement this function:
function processUsers(users: User[]): ValidationResult[] {

Here's similar validation logic in our codebase: [example]
Here's our ValidationResult type: [interface]"
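With that context, a completed implementation might look like the following sketch (the ValidationResult shape and the specific validation rules are assumptions, since the prompt elides them):

```typescript
interface User { id: string; email: string; name: string; }

interface ValidationResult {
  userId: string;
  valid: boolean;
  errors: string[];
}

// Validates each user and reports the problems found. The specific
// rules (non-empty name, basic email shape) are illustrative stand-ins
// for whatever the pasted validation example establishes.
function processUsers(users: User[]): ValidationResult[] {
  return users.map((user) => {
    const errors: string[] = [];
    if (!user.name.trim()) errors.push("name is required");
    if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(user.email)) {
      errors.push("email is invalid");
    }
    return { userId: user.id, valid: errors.length === 0, errors };
  });
}
```

The signature, the type definition, and the style example together constrain the output enough that the AI mostly has to fill in the rules.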

Task: Architectural Decision

Optimal context:

  • Project structure
  • Technology stack
  • Constraints (performance, budget, team size)
  • Business requirements
  • Related architecture decisions

Provide:

"We need to handle 10,000 concurrent users.
Tech stack: Node.js, PostgreSQL, AWS
Team size: 3 developers
Budget: $500/month infrastructure

Should we use REST, GraphQL, or gRPC? Explain tradeoffs."

Task: Debugging

Optimal context:

  • Full stack trace
  • Relevant code
  • Environment (Node v18, Python 3.10, etc.)
  • Reproduction steps
  • What you’ve already tried

Provide:

"Error: ENOENT: no such file or directory

Stack: [full stack trace]

Code: [relevant code section]

Environment: Node 18.10, macOS

I've tried: [what you've attempted]

What's wrong?"

Context Caching (Advanced)

Some tools allow context caching to improve performance:

Claude API context caching:

# First request (slow, caches context)
message 1: [entire codebase context] + "analyze this"

# Second request (fast, uses cached context)
message 2: [previous cached context] + "now do this"

If you have a large context, ask for it to be cached.
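With the Anthropic Messages API, for example, caching is requested by marking the large, stable content block with a cache_control field; the server can then reuse that block across requests. This sketch only builds the request body (model name and field shapes should be verified against the current API documentation):

```typescript
// Sketch of an Anthropic Messages API request body using prompt caching.
// The cache_control marker flags the large, stable context block for
// server-side reuse; verify field names against current API docs.
const requestBody = {
  model: "claude-3-5-sonnet-latest", // placeholder model name
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: "<entire codebase context goes here>",
      cache_control: { type: "ephemeral" }, // cache this large block
    },
  ],
  messages: [
    { role: "user", content: "Analyze this codebase." },
  ],
};

console.log(JSON.stringify(requestBody.system[0].cache_control));
// → {"type":"ephemeral"}
```

Subsequent requests that repeat the same cached prefix pay far less latency and cost for it than the first request did.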

Best Practices Summary

Key Takeaway: Provide the minimum context needed for the task, structured clearly. Too little context makes AI guess. Too much context makes AI confused. The right amount guides without overwhelming.

Exercises

  1. Create .cursorrules: Write a complete .cursorrules file for a project you’re working on. Include:

    • Technology stack
    • Code style
    • Architectural patterns
    • Naming conventions

    Test it by asking the AI for code and seeing if it follows your rules.
  2. Context Audit: Look at a recent piece of AI-generated code that wasn’t quite right. What context was missing? Would providing it have helped?

  3. Structure Improvement: Map your current project structure. Could it be organized more clearly for AI understanding? Propose improvements.

  4. Example Collection: Find three good examples of code that follow your project’s patterns. Compile them into a “patterns” file you can reference in AI requests.

  5. Conversation Study: Have a 10-message conversation with an AI about building a feature. Track:

    • What context did you provide in messages 1-3?
    • How did understanding improve in messages 4-7?
    • Was context refresh needed after message 7?
    • Could you have been more efficient?