Similar Tracks
- Large Model Training and Inference with DeepSpeed // Samyam Rajbhandari // LLMs in Prod Conference (MLOps.community)
- Exploring the Latency/Throughput & Cost Space for LLM Inference // Timothée Lacroix // CTO Mistral (MLOps.community)
- Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM | Jared Casper (@Scale)