LLM Training & Inference Internals
The full lifecycle from pre-training through post-training to inference optimization: scaling laws, RLHF pipelines, KV caching, and prompt-caching economics.
Pre-training Deep Dive
Post-training Pipeline
Inference Optimization & Prompt Caching
Model Compression: Distillation, Quantization & Pruning