Large Model Training and Inference with DeepSpeed // Samyam Rajbhandari // LLMs in Prod Conference