Foundations

Effective Code Completion

Lesson 3 of 4 · Estimated Time: 45 min


Code completion is where most developers first experience AI-assisted coding. It feels magical the first time — you start typing something and the AI finishes your thought. But like any tool, there’s a right way and a wrong way to use it.

The difference between struggling with completion suggestions and having them feel effortless comes down to understanding how the AI sees your code and how to communicate with it effectively.

How Code Completion Works

When you trigger code completion (or have it enabled continuously), the AI assistant analyzes:

  1. Your current file — Everything you’ve typed so far
  2. Open files — Other files in your editor (context window permitting)
  3. Your project structure — File names, directories, imports
  4. Language syntax — What’s valid for Python, JavaScript, TypeScript, etc.
  5. Common patterns — What developers typically write in this situation

Based on all this, it predicts the most likely next lines of code.

The Context Window Problem

AI models have a “context window” — a maximum amount of text they can see at once. GitHub Copilot can see roughly:

  • The current file (entire file, prioritizing lines near your cursor)
  • Last few files you looked at
  • Recently used imports and definitions

It cannot see:

  • Files not in your active editor
  • Comments you deleted
  • Your git history
  • Your architecture decisions (unless documented in code)

This means completion works best when the context is obvious from your code itself.

Writing Comments That Guide Completions

The single most effective technique for getting good completions is writing clear, descriptive comments. Comments are your direct channel to communicate intent to the AI.

Comment-Driven Development

Instead of starting with a function signature, start with what you want:

# Bad: No guidance for AI
def calculate_order_total(items):


# Good: Specific guidance
def calculate_order_total(items):
    # Given a list of dicts with 'price' and 'quantity' keys,
    # return the subtotal, tax (8%), and final total as a dict


# Better: Pseudocode helps AI understand
def calculate_order_total(items):
    # 1. Calculate subtotal from items
    # 2. Apply 8% sales tax
    # 3. Return dict with subtotal, tax, total

When you place your cursor after this comment and pause, Copilot will likely generate:

def calculate_order_total(items):
    # 1. Calculate subtotal from items
    # 2. Apply 8% sales tax
    # 3. Return dict with subtotal, tax, total
    subtotal = sum(item['price'] * item['quantity'] for item in items)
    tax = subtotal * 0.08
    return {'subtotal': subtotal, 'tax': tax, 'total': subtotal + tax}

Perfect! The comment communicated exactly what you wanted.
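If you do accept a block like this, verify it immediately rather than trusting it. The generated function above can be checked with a quick usage run:

```python
def calculate_order_total(items):
    # 1. Calculate subtotal from items
    # 2. Apply 8% sales tax
    # 3. Return dict with subtotal, tax, total
    subtotal = sum(item['price'] * item['quantity'] for item in items)
    tax = subtotal * 0.08
    return {'subtotal': subtotal, 'tax': tax, 'total': subtotal + tax}

cart = [
    {'price': 10.00, 'quantity': 2},
    {'price': 5.00, 'quantity': 1},
]
totals = calculate_order_total(cart)
# totals == {'subtotal': 25.0, 'tax': 2.0, 'total': 27.0}
```

A two-item cart like this takes seconds to check by hand and catches the most common completion bug: a formula that is subtly off.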

The Three-Comment Pattern

Use this proven pattern for consistent completions:

// Pattern 1: Function purpose
// Takes an array of user IDs and returns active users from the database
async function getActiveUsers(userIds: string[]): Promise<User[]> {

    // Pattern 2: Parameter validation
    // Validate userIds is a non-empty array of valid UUIDs
    if (!Array.isArray(userIds) || userIds.length === 0) {
        throw new Error('userIds must be a non-empty array');
    }

    // Pattern 3: Algorithm description
    // Query database with IN clause, filter to active status, return sorted by creation date

}

Each comment level primes the AI for more specific code.
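The same three-comment layering works in Python. Here is a minimal, runnable sketch; the dict-based database and the active/created record fields are hypothetical stand-ins for a real data layer:

```python
# Pattern 1: Function purpose
# Takes a list of user IDs and returns the active users from the database
def get_active_users(user_ids, database):
    # Pattern 2: Parameter validation
    # Validate user_ids is a non-empty list
    if not isinstance(user_ids, list) or not user_ids:
        raise ValueError("user_ids must be a non-empty list")

    # Pattern 3: Algorithm description
    # Look up each ID, keep only active users, return sorted by creation date
    users = [database[uid] for uid in user_ids if uid in database]
    active = [u for u in users if u["active"]]
    return sorted(active, key=lambda u: u["created"])
```

Each comment is written just before the code it describes, so the AI always has the most relevant intent immediately above the cursor.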

Triggering Effective Completions

Completion Triggers

There are explicit and implicit ways to trigger completion:

Explicit triggers:

  • Press Alt+\ (Option+\ on macOS) in VS Code to explicitly request a Copilot inline suggestion
  • This is useful when the AI isn’t suggesting anything
  • Use when you’re at a point where many completions are possible

Implicit triggers:

  • Pause after typing (usually 0.5-1 second)
  • Type # or // (comment trigger)
  • Type def, function, class (declaration keywords)
  • Type = (assignment, expects a value)

Dangerous triggers (use consciously):

  • Typing common variable names like data = (AI might guess wrong)
  • Broad function signatures like def helper(x): (too vague)

The Tab Completion Pattern

Once a suggestion appears, you have options:

# If the suggestion is exactly right:
res = fetch_data()   # [Copilot suggests: return res]
return res           # Press Tab to accept the entire suggestion

# If it's close but not quite:
res = fetch_data()   # [Copilot suggests: return res]
return process(res)  # Start typing to override the suggestion

Pro tip: You don’t have to accept the whole suggestion. Start typing where the suggestion diverges from what you want, and it will be overridden.

Multi-Line Suggestions

Some completions span multiple lines. These are where code completion shines.

Recognizing Multi-Line Opportunities

Multi-line suggestions work best when:

  1. The pattern is common — You’re implementing a standard algorithm
  2. The context is clear — Variable names tell a story
  3. The scope is bounded — The AI can predict the whole block

Good example for multi-line:

def process_csv_file(filename):
    """Read CSV, normalize, validate, and return records."""
    with open(filename, 'r') as f:

    # [AI sees: reading CSV, normalizing, processing]
    # Suggestion might be: entire 20-line function body
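For illustration, here is one plausible shape for that generated body, using only the standard library. The normalization step (stripped values, lowercased headers) and the validation rule (skip rows with any empty value) are assumptions for the sketch, not requirements from the docstring:

```python
import csv

def process_csv_file(filename):
    """Read CSV, normalize, validate, and return records."""
    records = []
    with open(filename, "r", newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:
            # Normalize: strip whitespace, lowercase the header keys
            clean = {k.strip().lower(): v.strip() for k, v in row.items()}
            # Validate: skip rows with any missing value
            if any(v == "" for v in clean.values()):
                continue
            records.append(clean)
    return records
```

A suggestion of this size is exactly where careful review matters: check the skip-row rule, the header handling, and what happens on an empty file before pressing Tab.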

Accepting Multi-Line Suggestions

When a multi-line suggestion appears:

  1. Read it carefully — Don’t just Tab blindly
  2. Check for logic errors — Is the loop right? Off-by-one errors?
  3. Verify edge cases — What happens with empty input?
  4. Look for missing imports — Did it use something not imported?

If 80% of it is right but 20% is wrong, reject it and write that 20% manually. It’s faster than accepting and fixing.

Common Completion Patterns

Here are situations where completion works exceptionally well:

Pattern 1: Standard Loops

# This pattern is so common, AI gets it right 90%+ of the time
processed = []
for item in items:
    # [cursor here - AI knows you probably want to access item properties]
    processed.append(item.upper())

Pattern 2: Error Handling

try:
    result = risky_operation()
except Exception as e:
    # [AI suggests logging and re-raising or returning None]
    logger.error(f"Operation failed: {e}")
    return None

Pattern 3: Configuration Objects

config = {
    'host': 'localhost',
    'port': 5432,
    'database': 'myapp',
    # [AI continues the pattern]
    'username': 'admin',
    'password': os.getenv('DB_PASSWORD'),
}

Pattern 4: API Response Handling

const response = await fetch(url);
// [AI knows: check status, parse JSON, handle errors]
if (!response.ok) {
    throw new Error(`API error: ${response.status}`);
}
const data = await response.json();
return data;

When Completion Fails (and What to Do)

Completion isn’t magic. It fails regularly. Understanding why and what to do is crucial.

Failure Mode 1: “It Guessed Wrong”

# You wrote:
users = database.query(User).where(active=True)

# AI suggests: .all()  [returns list, but you actually want singular user]
# You wanted: .first()

What to do:

  • Don’t accept it
  • Start typing what you want: .first() overrides the suggestion
  • Your keystrokes become part of the context, steering the next suggestion toward what you want

Failure Mode 2: “Suggestion Assumes Too Much”

// You wrote:
function validateEmail(email) {

// AI suggests: entire 50-line email validation function
// You wanted: simple regex check

What to do:

  • Reject the suggestion (Escape)
  • Add a comment being more specific:
function validateEmail(email) {
    // Simple regex check for basic email format

Failure Mode 3: “Context Is Too Vague”

def process_data(d):
    # [AI has no idea what d contains or what processing means]
    # Suggestions are generic/wrong

What to do:

  • Add type hints:
def process_data(d: Dict[str, float]) -> List[float]:
    # Now AI understands the data structure
    # Suggestions improve dramatically
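A fuller sketch of this fix, with the imports the annotations need. The body here (dropping non-finite values and returning the rest sorted) is invented purely to show how the type hints constrain the suggestion space:

```python
import math
from typing import Dict, List

def process_data(d: Dict[str, float]) -> List[float]:
    # With typed input, the AI can confidently suggest dict-value operations
    values = [v for v in d.values() if math.isfinite(v)]
    return sorted(values)
```

The annotations tell the AI that `d.values()` yields floats and the result must be a list of floats, which rules out most of the generic guesses it would otherwise make.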

Failure Mode 4: “It Suggests Code You Don’t Want in Your Codebase”

# You're writing logging code
logger.info("Starting process")

# AI suggests a massive debug dump that's not your style
# AI learned from open-source code that does this

What to do:

  • This is where .cursorrules helps
  • Add to your rules file: “Use structured logging, never print debug info”
  • Create a .clinerules or .cursorrules in your project root

Advanced: Context Management for Better Completions

What You Can Do to Improve Context

1. Use clear variable names

# Bad context
for x in data:
    y.append(x + 1)

# Good context
for user in active_users:
    verified_users.append(user)

2. Use type hints (Python)

# Bad
def calculate(items):
    return sum(...)

# Good
def calculate(items: List[OrderItem]) -> float:
    return sum(...)
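Filling in the "good" version end to end, with OrderItem as a hypothetical dataclass since the original elides its definition:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OrderItem:
    price: float
    quantity: int

def calculate(items: List[OrderItem]) -> float:
    # The annotation tells the AI each item has .price and .quantity
    return sum(item.price * item.quantity for item in items)
```

With the untyped version, the AI has to guess whether `items` holds dicts, tuples, or objects; the annotation removes that guess entirely.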

3. Use TypeScript types (or JSDoc annotations in plain JavaScript)

// Bad
function transform(data) {

// Good
function transform(data: Array<DataPoint>): TransformedResult {

4. Import what you’ll use

# If you've imported pandas and numpy
import pandas as pd
import numpy as np

# AI knows you might use pd.DataFrame, np.array, etc.
# Suggestions will include these

5. Position your cursor strategically

# These are different and produce different suggestions:

# Cursor at A: AI might suggest initialization
data = get_data()
# [cursor A - AI suggests: validation/processing]

# Cursor at B: AI might suggest return
result = process(data)
return result
# [cursor B - AI suggests: logging/error handling]

Measuring Your Completion Effectiveness

After a few days of using completion, ask yourself:

  • Acceptance rate: What % of suggestions do you actually use?
    • 30-50% is normal for new users
    • 60-80% for experienced developers
  • Time saved: Are you actually faster with completion on?
    • If no, disable it temporarily and revisit later
  • Quality: Is the accepted code good quality?
    • If you’re fixing suggestions immediately, disable until you learn better patterns
  • Learning: What patterns is the AI helping you discover?
    • This is the real value — not just speed, but learning idioms

Exercises

  1. Comment-Driven Development: Take a function you wrote recently. Rewrite it starting with pseudo-code comments, then let completion suggest the implementation. Compare the result to what you originally wrote.

  2. Completion Muscle Memory: Create a new file in your preferred language. Write 5 functions using only comments as guidance, accepting completions for all implementation. Review for quality. What surprised you?

  3. Context Audit: Find a file where completion suggestions are poor. Audit it for:

    • Variable naming clarity
    • Type hints
    • Comment specificity

    Improve these, then see if completion suggestions improve.
  4. Pattern Recognition: For one week, keep a list of:

    • What patterns get 90%+ acceptance rate?
    • What patterns fail most often?

    This will inform where to trust completion vs. write manually.