
Part 1 of 3
Lecture 1: Micrograd & Backpropagation
Build an autograd engine from scratch. Covers derivatives, computation graphs, the chain rule, and training a multi-layer perceptron with gradient descent.
February 15, 2026
Watch workshop
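The core of this lecture — a scalar autograd engine that records a computation graph and backpropagates via the chain rule — can be sketched in a few dozen lines. The `Value` class below is an illustrative assumption in the spirit of micrograd, not the workshop's exact code.

```python
class Value:
    """A scalar that remembers how it was computed, so gradients can flow back."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad       # d(a+b)/da = 1
            other.grad += out.grad      # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a = Value(2.0)
b = Value(-3.0)
c = a * b + a    # forward pass builds the computation graph
c.backward()     # backward pass fills in gradients
print(a.grad)    # dc/da = b + 1 = -2.0
print(b.grad)    # dc/db = a = 2.0
```

Gradient accumulation with `+=` (rather than `=`) matters here: a node used twice, like `a` above, receives a contribution from each path through the graph.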

Build a character-level language model. Covers bigram statistics, maximum likelihood estimation, softmax, and training a neural network to predict the next character.
February 22, 2026
Watch workshop
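Bigram statistics and maximum likelihood estimation, as named in this lecture's description, reduce to counting adjacent character pairs and normalizing. A minimal sketch on a toy corpus (the names and the `.` start/end marker here are illustrative assumptions):

```python
from collections import Counter

words = ["emma", "olivia", "ava"]  # toy corpus for demonstration
counts = Counter()
for w in words:
    chars = ["."] + list(w) + ["."]   # '.' marks word start and end
    for ch1, ch2 in zip(chars, chars[1:]):
        counts[(ch1, ch2)] += 1

def prob(ch1, ch2):
    # MLE: P(ch2 | ch1) = count(ch1, ch2) / count(ch1, *)
    total = sum(c for (a, _), c in counts.items() if a == ch1)
    return counts[(ch1, ch2)] / total

print(prob(".", "a"))   # fraction of names that start with 'a'
```

Sampling from these conditional distributions character by character already yields a (crude) generative model; the neural-network version in the lecture learns the same table as trainable parameters.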
Build a GPT model from scratch, working up from character-level bigram, trigram, and multi-layer perceptron models. Covers word embeddings, softmax, cross-entropy loss, backpropagation, and hyperparameter tuning.
March 1, 2026
Watch workshop
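Two of the building blocks this lecture names, softmax and cross-entropy loss, can be sketched with plain math and no framework. This is a minimal illustration, not the workshop's implementation:

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, target):
    # Negative log-probability the model assigns to the correct class
    probs = softmax(logits)
    return -math.log(probs[target])

logits = [2.0, 1.0, 0.1]
probs = softmax(logits)
print(sum(probs))                 # probabilities sum to 1
print(cross_entropy(logits, 0))   # low loss: class 0 has the highest logit
```

The loss is lowest when the largest logit sits at the target index, which is exactly the signal backpropagation uses to push the model's next-character predictions toward the training data.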