Deep Learning DIY
Module 0 - Software installation
Module 1 - Introduction & General Overview
Module 2a - PyTorch tensors
Module 2b - Automatic differentiation
Module 2c - Automatic differentiation: VJP and intro to JAX
Module 3 - Loss functions for classification
Module 4 - Optimization for DL
Module 5 - Stacking layers
Module 6 - Convolutional neural networks
Module 7 - Dataloading
Module 8a - Embedding layers
Module 8b - Collaborative filtering
Module 8c - Word2vec
Module 9a - Autoencoders
Module 9b - UNets
Module 9c - Flows
Module 10 - Generative adversarial networks
Module 11a - Recurrent Neural Networks (theory)
Module 11b - RNNs in practice
Module 11c - Batches with sequences in PyTorch
Module 12 - Attention and Transformers
Module 13 - Siamese Networks and Representation Learning
Module 14a - The Benefits of Depth
Module 14b - The Problems with Depth
Module 15 - Dropout
Module 16 - BatchNorm
Module 17 - ResNets
Module 18a - Denoising Diffusion Probabilistic Models
Module 19 - Zero-shot classification with CLIP
Homeworks
Homework 1 - MLP from scratch
Homework 2 - Class Activation Map and adversarial examples
Homework 3 - VAE for MNIST clustering and generation
Bonus
Module - Intro to Julia: Autodiff with dual numbers
Module - Deep learning on graphs
Graph - Node embeddings
Graph - Signal processing on graphs
Graph - Graph embeddings and GNNs
Post - Spectral GCN
Post - Convolutions from first principles
Post - Invariant and equivariant networks
Graph - Exploiting Graph Invariants in Deep Learning
Guest Lectures
Privacy Preserving ML - Daniel Huynh
Module - Deep Learning on graphs (1)
Node embedding
0:00 - Introduction
2:12 - Language model
5:04 - Skip-gram model
8:44 - Hierarchical softmax
11:19 - DeepWalk
14:26 - Negative sampling
19:10 - node2vec
22:28 - Results on Les Misérables
25:10 - Results for multi-label classification
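The chapter list above moves from the skip-gram model through DeepWalk, negative sampling, and node2vec. As a minimal sketch of how these pieces fit together (this is not the course's code; the toy graph, walk length, window size, number of negatives, and learning rate are all illustrative assumptions), here is a DeepWalk-style pipeline in PyTorch: sample uniform random walks on a graph, then train skip-gram embeddings with negative sampling.

```python
# Hypothetical DeepWalk-style sketch (not the course code):
# 1) sample uniform random walks on a toy graph,
# 2) train skip-gram node embeddings with negative sampling.
import random
import torch
import torch.nn as nn

# Toy undirected graph as an adjacency list (illustrative).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
num_nodes = len(graph)

def random_walk(start, length=6):
    # Uniform random walk, as in DeepWalk; node2vec would bias
    # this step with second-order transition probabilities.
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(graph[walk[-1]]))
    return walk

def skipgram_pairs(walk, window=2):
    # (center, context) pairs within a window, as in word2vec.
    pairs = []
    for i, u in enumerate(walk):
        for j in range(max(0, i - window), min(len(walk), i + window + 1)):
            if j != i:
                pairs.append((u, walk[j]))
    return pairs

dim = 16
emb_in = nn.Embedding(num_nodes, dim)   # center-node embeddings
emb_out = nn.Embedding(num_nodes, dim)  # context-node embeddings
opt = torch.optim.Adam(
    list(emb_in.parameters()) + list(emb_out.parameters()), lr=0.01
)

for epoch in range(100):
    walks = [random_walk(v) for v in range(num_nodes)]
    pairs = [p for w in walks for p in skipgram_pairs(w)]
    centers = torch.tensor([c for c, _ in pairs])
    contexts = torch.tensor([c for _, c in pairs])
    # 5 random negatives per pair (may occasionally hit a true
    # context; acceptable for a sketch).
    negatives = torch.randint(num_nodes, (len(pairs), 5))

    # Negative-sampling loss: pull true (center, context) pairs
    # together, push random pairs apart.
    h = emb_in(centers)                                # (B, dim)
    pos = (h * emb_out(contexts)).sum(-1)              # (B,)
    neg = torch.bmm(emb_out(negatives), h.unsqueeze(-1)).squeeze(-1)  # (B, 5)
    loss = -(torch.nn.functional.logsigmoid(pos).mean()
             + torch.nn.functional.logsigmoid(-neg).mean())

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Hierarchical softmax (chapter 8:44) is an alternative to the negative-sampling loss above: instead of contrasting against random nodes, it factorizes the softmax over a binary tree of nodes so each update costs O(log n).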
Slides
slides