About This Course
When prompting alone is not enough, fine-tuning lets you customize LLMs for your specific use case. This course teaches modern fine-tuning techniques that achieve strong results even on limited compute.
You will learn about supervised fine-tuning, LoRA and QLoRA for parameter-efficient training, data preparation and formatting, training with the Hugging Face Trainer API, evaluation and benchmarking, and deployment of fine-tuned models.
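To make the parameter-efficient idea behind LoRA concrete, here is a minimal NumPy sketch (the dimensions, rank, and scaling value are illustrative assumptions, not course material): instead of updating a full weight matrix W, LoRA trains a low-rank update B @ A with far fewer parameters.

```python
import numpy as np

# LoRA core idea: keep the pretrained weight W (d_out x d_in) frozen and
# train a low-rank update delta_W = B @ A with rank r << min(d_out, d_in).
# Dimensions and rank below are illustrative, not from the course.
d_out, d_in, r = 1024, 1024, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable; zero init so the update starts at 0
alpha = 16                                  # scaling hyperparameter (assumed value)

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A,
    # applied without ever materializing the full update matrix.
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

full_params = W.size
lora_params = A.size + B.size
print(f"full: {full_params:,}  lora: {lora_params:,}  ratio: {lora_params / full_params:.2%}")
```

Because only A and B are trained, the trainable parameter count drops from about a million to roughly 16k here, which is why LoRA and its quantized variant QLoRA fit on modest GPUs.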
Hands-on projects include fine-tuning models for code generation, customer support, medical text analysis, and structured data extraction. You will learn to work with models such as Llama and Mistral, using both local GPUs and cloud training platforms.