Advanced
Training and Fine-tuning Large Language Models
Learn the intricacies of training LLMs including distributed training, optimization techniques, and practical fine-tuning strategies. Understand what happens at scale.
4 lessons · ~35 hours estimated · 4 objectives · 1 assessment
By completing this module you will be able to:
✓ Implement distributed training across multiple GPUs and TPUs
✓ Apply advanced optimization techniques such as mixed-precision training
✓ Understand parameter-efficient fine-tuning methods (LoRA, QLoRA)
✓ Implement instruction tuning and the fundamentals of RLHF
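To give a flavor of the parameter-efficient fine-tuning objective, here is a minimal NumPy sketch of the LoRA idea: the pretrained weight is frozen, and a low-rank update B·A is trained instead. All names and dimensions below are illustrative, not taken from the course materials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: rank r is much smaller than the layer dimensions
d_in, d_out, r, alpha = 16, 16, 4, 8

# Frozen pretrained weight (never updated during fine-tuning)
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank adapters; B starts at zero, so the adapted
# layer initially behaves exactly like the pretrained one
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B = 0 the adapter contributes nothing yet
assert np.allclose(lora_forward(x), W @ x)

# Adapter parameter count vs. full fine-tuning of W
adapter_params = r * (d_in + d_out)   # 4 * 32 = 128
full_params = d_in * d_out            # 16 * 16 = 256
print(adapter_params, "adapter params vs", full_params, "full")
```

The payoff is the last two lines: only r·(d_in + d_out) adapter parameters are trained instead of d_in·d_out, and the gap widens rapidly at real model dimensions.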
Lessons
Work through each lesson in order. Each one builds on the concepts from the previous lesson.
1. Pre-Training LLMs: Data and Architecture
2. Fine-Tuning with LoRA and QLoRA
3. RLHF and Alignment
4. LLM Evaluation and Benchmarking
Recommended Reading
Supplement your learning with these selected chapters from the course library.
- Hands-on Large Language Models (Chapters 4-8)
- LLM Engineer's Handbook (Chapters 2-5)
- Mastering PyTorch 2e (Chapters 9-12)
Module Assessment