AI Frameworks

Master the frameworks that power modern AI. 50 topics covering deep learning (PyTorch, TensorFlow, JAX, MLX), LLM/RAG (LangChain, LlamaIndex, DSPy, HuggingFace), distributed training (DeepSpeed, FSDP, Megatron, NeMo, Ray), classical ML (sklearn, XGBoost, LightGBM, pandas, Polars), specialized libraries (RAPIDS, PyG, spaCy, OpenCV, YOLO), and MLOps (MLflow, W&B, ClearML, DVC, Kubeflow, Ray).

50 Topics
300 Lessons
6 Categories
100% Free

The AI Frameworks track exists because the honest answer to "which framework should I use?" has changed three times in the last five years, and because the cost of picking the wrong one (migration, team retraining, rewriting production code) is high. We cover PyTorch, TensorFlow, JAX, Keras, HuggingFace Transformers, LangChain, LlamaIndex, LangGraph, and the orchestration layers that sit above them, with attention to where each is strong and where it tends to disappoint teams.

What is useful about a framework comparison is rarely the feature list; it is the failure modes. Does the framework make it easy to profile and debug? Does it version well? Is the community active enough that a weird error has already been answered? Is the library small enough to reason about, or is it a stack of abstractions that hides the interesting parts? We apply that lens to every framework in the track so you can make the choice that will not haunt you in 18 months.

All Topics

50 topics organized into 6 categories spanning the full AI framework landscape.

Deep Learning Frameworks

🔥

PyTorch Mastery

Master PyTorch end-to-end. Learn tensors, autograd, nn.Module, DataLoader, torch.compile, distributed training, and the patterns that ship 80% of production AI today.

6 Lessons
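To give a taste of the tensors → autograd → nn.Module arc this topic covers, here is a minimal training step. The model, data, and hyperparameters are invented for illustration, not taken from the course:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical toy regression: learn to sum four input features.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.01)

x = torch.randn(32, 4)          # fake batch of 32 samples
y = x.sum(dim=1, keepdim=True)  # fake target

losses = []
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()   # autograd populates .grad on every parameter
    opt.step()
    losses.append(loss.item())

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The same zero-grad / forward / backward / step loop scales from this toy up to the distributed and torch.compile patterns covered later in the topic.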
🧠

TensorFlow / Keras 3

Master TensorFlow 2.x and Keras 3 (multi-backend). Learn the functional API, tf.data, tf.function, distributed strategies, and TFLite for mobile/edge deployment.

6 Lessons

JAX Mastery

Master JAX: XLA-compiled NumPy with autodiff. Learn jit, grad, vmap, pmap, sharding, and the patterns that power Google's most ambitious AI research.

6 Lessons
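The jit / grad / vmap trio mentioned in the blurb composes: each is a function transform, so you can stack them. A minimal sketch with an invented loss function:

```python
import jax
import jax.numpy as jnp

# A made-up scalar loss: squared norm of x @ w.
def loss(w, x):
    return jnp.sum((x @ w) ** 2)

grad_fn = jax.jit(jax.grad(loss))               # d(loss)/dw, XLA-compiled
batched = jax.vmap(grad_fn, in_axes=(None, 0))  # per-example gradients

w = jnp.ones(3)
xs = jnp.ones((5, 2, 3))   # batch of 5 inputs, each shaped (2, 3)
g = batched(w, xs)
print(g.shape)  # one gradient of shape (3,) per batch element
```

Because the transforms are composable, swapping vmap for pmap (or a sharding annotation) is how the same code moves from one device to many.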
🌡

Flax (JAX Neural Networks)

Master Flax: the most popular JAX neural network library. Learn the newer nnx API and the legacy linen API, training loops, and porting PyTorch models to Flax.

6 Lessons
🎯

Equinox (JAX)

Master Equinox: PyTorch-style neural networks for JAX with full pytree compatibility. Learn modules, filtered jit, and the patterns for clean JAX research code.

6 Lessons

PyTorch Lightning

Master PyTorch Lightning: organize PyTorch code into LightningModule, Trainer, and DataModule. Get distributed training, mixed precision, and checkpointing for free.

6 Lessons
🎯

fastai

Master fastai: high-level deep learning library on top of PyTorch. Learn DataBlock, Learner, callbacks, and the patterns that make state-of-the-art accessible.

6 Lessons
🍏

Apple MLX

Master Apple MLX: array framework optimized for Apple Silicon. Learn unified memory, lazy evaluation, MLX-LM, and the patterns for fast AI on M-series chips.

6 Lessons
🏹

PaddlePaddle (Baidu)

Master PaddlePaddle: Baidu's open-source deep learning framework. Learn dynamic and static graphs, PaddleNLP, PaddleOCR, and the deployment story.

6 Lessons
🧠

MindSpore (Huawei)

Master MindSpore: Huawei's open-source AI framework. Learn the AI-native graph compiler, Ascend-optimized training, and when MindSpore beats alternatives.

6 Lessons

LLM & RAG Frameworks

🤗

HuggingFace Transformers

Master HuggingFace Transformers: 1M+ pretrained models with one API. Learn AutoModel, AutoTokenizer, pipelines, Trainer, and the patterns for production HF use.

6 Lessons
🎨

HuggingFace Diffusers

Master HuggingFace Diffusers: state-of-the-art diffusion models for image, video, and audio generation. Learn pipelines, schedulers, and custom training.

6 Lessons
🔧

HuggingFace PEFT (LoRA, QLoRA)

Master HuggingFace PEFT: parameter-efficient fine-tuning. Learn LoRA, QLoRA, prefix tuning, prompt tuning, IA3, and the patterns for cheap LLM fine-tunes.

6 Lessons
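The idea behind LoRA, the centerpiece of this topic, fits in a few lines: instead of updating a full weight matrix W, you train a low-rank pair B·A and add it on top. A minimal NumPy sketch (shapes and rank are illustrative, and B starts at zero so the adapted model initially matches the frozen one):

```python
import numpy as np

d_out, d_in, r = 64, 64, 8   # rank r << d, so the adapter is cheap to train

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                  # zero init: no change at step 0

def forward(x):
    # Base path plus low-rank adapter path: x (W + BA)^T.
    return x @ W.T + x @ (B @ A).T

x = rng.standard_normal((4, d_in))
assert np.allclose(forward(x), x @ W.T)   # adapter is a no-op at init

print(A.size + B.size, "adapter params vs", W.size, "full fine-tune")
```

QLoRA applies the same trick with the frozen W held in 4-bit precision, which is why the memory savings compound.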
🧶

HuggingFace TRL (RLHF, DPO)

Master HuggingFace TRL: train LLMs with reinforcement learning. Learn SFTTrainer, DPOTrainer, PPOTrainer, KTOTrainer, and the alignment training patterns.

6 Lessons

HuggingFace Accelerate

Master HuggingFace Accelerate: distributed training that 'just works'. Learn DeepSpeed/FSDP integration, mixed precision, and zero-code-change distributed launches.

6 Lessons
🔗

LangChain

Master LangChain: the most popular LLM application framework. Learn LCEL, chains, retrievers, memory, and the patterns for production LangChain apps.

6 Lessons
🤭

LlamaIndex

Master LlamaIndex: the data framework for LLM apps. Learn ingestion, indexing, query engines, agents, and the patterns for production RAG with LlamaIndex.

6 Lessons
🔮

DSPy

Master DSPy: program LLMs declaratively, then optimize. Learn signatures, modules, optimizers (BootstrapFewShot, MIPRO), and the algorithmic prompt-engineering pattern.

6 Lessons
🌿

Haystack

Master Haystack: production-ready LLM framework from deepset. Learn pipelines, components, document stores, and Haystack 2.0's modular architecture.

6 Lessons

Semantic Kernel (Microsoft)

Master Microsoft Semantic Kernel: SDK for integrating LLMs into C#, Python, Java apps. Learn plugins, planners, and Microsoft's enterprise AI integration patterns.

6 Lessons

Training & Distributed

🚀

DeepSpeed

Master Microsoft DeepSpeed: train trillion-parameter models. Learn ZeRO stages, optimizer offload, pipeline parallelism, and the patterns for memory-efficient training.

6 Lessons
🔥

PyTorch FSDP

Master PyTorch FSDP (Fully Sharded Data Parallel). Learn auto-wrap policies, mixed precision, activation checkpointing, and FSDP2 (per-parameter).

6 Lessons
🧠

Megatron-LM (NVIDIA)

Master Megatron-LM: NVIDIA's framework for training huge language models. Learn tensor parallelism, sequence parallelism, and the patterns powering frontier-scale training.

6 Lessons
🎬

NVIDIA NeMo

Master NVIDIA NeMo: end-to-end framework for conversational AI, ASR, TTS, and LLMs. Learn NeMo 2.0, NeMo Curator, NeMo Aligner, and production patterns.

6 Lessons

Ray Train

Master Ray Train: distributed training on Ray. Learn TorchTrainer, integration with FSDP/DeepSpeed, fault tolerance, and the patterns for elastic training.

6 Lessons
🎧

Composer (MosaicML)

Master MosaicML Composer (now part of Databricks): efficient PyTorch training with algorithmic speedups. Learn Trainer, algorithms (e.g., ALiBi, EMA), and the deterministic recipes.

6 Lessons
🧶

ColossalAI

Master ColossalAI: efficient large-scale model training with auto-parallelism. Learn the Booster API, ZeRO, tensor parallelism, and ColossalChat for RLHF.

6 Lessons
🦎

Axolotl

Master Axolotl: YAML-driven LLM fine-tuning framework. Learn config-driven training, LoRA/QLoRA recipes, and the patterns for fast iteration on 7B-70B models.

6 Lessons

Classical ML

Specialized Frameworks

MLOps & Experimentation

Why an AI Frameworks Track?

The right framework lets you focus on the problem; the wrong one becomes the problem.

🔥

Deep Learning Stack

PyTorch, TensorFlow, JAX, MLX, Flax, Equinox, Lightning, fastai, PaddlePaddle, MindSpore.

🤗

LLM Ecosystem

HuggingFace Transformers, Diffusers, PEFT, TRL, Accelerate; LangChain, LlamaIndex, DSPy, Haystack, Semantic Kernel.

🚀

Distributed Training

DeepSpeed, FSDP, Megatron-LM, NeMo, Ray Train, Composer, ColossalAI, Axolotl.

📊

MLOps & More

Classical ML (sklearn, XGBoost, LightGBM, pandas, Polars), specialized (RAPIDS, PyG, spaCy, OpenCV, YOLO), MLOps (MLflow, W&B, ClearML, DVC, Kubeflow, Ray).