Similar Tracks
Quantization explained with PyTorch - Post-Training Quantization, Quantization-Aware Training
Umar Jamil
Mistral / Mixtral Explained: Sliding Window Attention, Sparse Mixture of Experts, Rolling Buffer
Umar Jamil
Coding a Transformer from scratch on PyTorch, with full explanation, training and inference.
Umar Jamil
Coding LLaMA 2 from scratch in PyTorch - KV Cache, Grouped Query Attention, Rotary PE, RMSNorm
Umar Jamil
LoRA: Low-Rank Adaptation of Large Language Models - Explained visually + PyTorch code from scratch
Umar Jamil