Advanced

Training and Fine-tuning Large Language Models

Learn the intricacies of training LLMs, including distributed training, optimization techniques, and practical fine-tuning strategies, and understand what happens at scale.

Estimated Time 35 hours

Introduction

4 Lessons · 35h Est. Time · 4 Objectives · 1 Assessment

By completing this module you will be able to:

Implement distributed training across multiple GPUs and TPUs
Master advanced optimization techniques such as mixed precision training
Understand parameter-efficient fine-tuning methods (LoRA, QLoRA)
Implement instruction tuning and the fundamentals of RLHF (reinforcement learning from human feedback)
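To give a feel for the parameter-efficient fine-tuning objective above, here is a minimal, hypothetical NumPy sketch of the LoRA idea: freeze the pretrained weight matrix and train only a low-rank update. The dimensions, rank, and scaling factor below are illustrative assumptions, not values from the lessons.

```python
import numpy as np

# Hypothetical LoRA sketch: instead of updating the full d_out x d_in
# weight W, freeze W and train a low-rank update B @ A with rank
# r << min(d_in, d_out).
d_in, d_out, r = 4096, 4096, 8          # illustrative sizes
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = 0.01 * rng.standard_normal((r, d_in))  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero init
alpha = 16.0                               # scaling hyperparameter

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); with B = 0 this equals the base model,
    # so fine-tuning starts from the pretrained behavior.
    return W @ x + (alpha / r) * (B @ (A @ x))

full = W.size
trainable = A.size + B.size
print(f"trainable: {trainable:,} of {full:,} weights ({trainable / full:.2%})")
```

For a 4096x4096 layer at rank 8, the trainable update is well under 1% of the layer's weights, which is why LoRA makes fine-tuning large models tractable on modest hardware.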

Lessons

Work through each lesson in order. Each one builds on the concepts from the previous lesson.

1. Pre-Training LLMs: Data and Architecture (60 min)
2. Fine-Tuning with LoRA and QLoRA (55 min)
3. RLHF and Alignment (55 min)
4. LLM Evaluation and Benchmarking (55 min)

Recommended Reading

Supplement your learning with these selected chapters from the course library.

Hands-on Large Language Models (Chapters 4-8)
LLM Engineer's Handbook (Chapters 2-5)
Mastering PyTorch 2e (Chapters 9-12)

Module Assessment

Training and Fine-tuning Large Language Models

Question 1 of 3

What is LoRA (Low-Rank Adaptation) and why is it valuable for fine-tuning large models?