# ECE 510: Deep Learning Theory and Practice, Spring 2019

Meeting time: Mon/Weds 6:40-8:30 PM, Engineering Building 102
Office hours: Mon/Weds 5:30-6:30 PM (immediately before lecture), FAB 85-03

### Course Description

This course provides an introduction to the theory and practice of deep learning, with an emphasis on deep neural network-based approaches. You will gain a strong understanding of the principles of machine learning through the lens of these networks. You will get to know the most prominent models, such as convolutional and recurrent neural networks, along with topics that are the subject of current research, such as representation learning and deep generative models. As a student, you can expect to learn the concepts, methods, and techniques necessary to put deep learning to work in modern applications.

### Lecture Notes

• Lecture 1 (pdf)
• Lecture 2 (pdf)
• Lecture 3 (pdf)
• Lecture 4 (pdf)
• Lecture 5 (pdf)
• Lecture 6 (pdf)
• Lecture 7 (pdf)
• Lecture 8 (pdf)
• Lecture 9 (pdf)
• Lecture 10 (pdf)
• Lecture 11 (pdf)
• Lecture 12 (pdf)
• Lecture 13 (pdf)
• Lecture 14 (pdf)
• Lecture 15 (pdf)
• Lecture 16 (pdf)
• Lecture 17 (pdf)
• Lecture 18 (pdf)
• Lecture 19 (pdf)

### Homework

All assignments must be submitted via Gradescope to receive credit. Instructions for setting up an account are included in Homework 0.

I provide the $\LaTeX$ source used to generate each homework below. Feel free to use it as a starting point when typing up your solutions.

• Homework 0, Due: April 8, 2019 (pdf) (tex)
• Homework 1, Due: April 16, 2019 (pdf) (tex)
• Homework 2, Due: April 23, 2019 (pdf) (tex)
• Homework 3, Due: May 6, 2019 (pdf) (tex)
• Homework 4, Due: May 22, 2019 (pdf) (tex)

### Project

The project description and template are now available.

• Please email me your groups and selected topic by Friday, May 24th.
• I will read your project description once and give feedback, as long as you get it to me by 11:59 PM on June 1, 2019.

### Files for Python Tutorials

• Tutorial 1 PLA (ipynb)
• Tutorial 2 Linear Regression (ipynb)
• Tutorial 3 Logistic Regression (ipynb)
• Tutorial 4 MLP using SGD on MNIST (ipynb)
• Tutorial 5 Hyperparameter Tuning (ipynb)
• Tutorial 6 Convolutional Neural Networks (ipynb)
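To give a flavor of what Tutorial 1 covers, the perceptron learning algorithm (PLA) can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the tutorial's actual notebook code; the function name `pla` and its interface are my own choices.

```python
import numpy as np

def pla(X, y, max_iters=1000):
    """Perceptron Learning Algorithm on linearly separable data.

    X: (n, d) feature matrix; y: (n,) labels in {-1, +1}.
    Returns a weight vector w, with the bias stored in w[0].
    """
    # Prepend a constant 1 to each example so w[0] acts as the bias term.
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(max_iters):
        preds = np.sign(Xb @ w)
        misclassified = np.where(preds != y)[0]
        if misclassified.size == 0:
            break  # every point is classified correctly; PLA has converged
        i = misclassified[0]
        # PLA update: nudge w toward (or away from) the misclassified point
        w += y[i] * Xb[i]
    return w
```

For linearly separable data the loop is guaranteed to terminate; on non-separable data it simply stops after `max_iters` updates.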

Each week, we will post readings and other media that complement the lecture material. [TAGS] are acronyms for the textbooks listed under Resources below. Most of this content is available online at no cost. These readings are strongly recommended, but not required. Feel free to skip familiar topics; otherwise, dive in!

• Week 1: LFD (ch. 1 on PLA), VMLS (ch. 3 on norms and inner products), PDSH (ch. 2 on Numpy; ch. 4 on Matplotlib), Getting started with conda, Jupyter Notebook: An Introduction, and Deep Learning with PyTorch: A 60 Minute Blitz
• Week 2: LFD (ch. 1.3 on feasibility of learning, 1.4 on error & noise, 3.2 on linear regression, 3.3 on gradient descent, maximum likelihood estimation, logistic regression), DLB (ch. 4 on grad descent, ch. 5 on MLE), VMLS (ch. 12 on least squares), PDSH (ch. 5 in depth on linear regression)
• Week 3: LFD (ch. 2.1 on the theory of generalization, e-chapter 7.1-7.3 on forward prop/backprop), video: "What is backpropagation really doing?", Deep Learning with PyTorch: A 60 Minute Blitz (first 3 sections)
• Week 4: LFD (ch. 2.2 on interpreting the generalization bound, ch. 4.1-4.3 on overfitting, regularization, and validation, e-chapter 7.3-7.4 on approximation and early stopping), DLB (ch. 8.1 on batch/mini-batch algorithms, ch. 11.4 on selecting hyperparameters)
• Week 5: LFD (ch. 2.3 on bias and variance), MLY (p. 41 on bias and variance in practice), DLB (ch. 5.2.2 on regularization and weight decay, 7.4 on dataset augmentation, 7.12 on dropout)
• Week 6: LFD (ch 4.3 on validation and cross validation), Stanford CS231n material ("ConvNets"), DLB (ch. 9.1-9.5 on CNNs, ch. 9.10-9.11 on the neuroscience and history)
• Week 7: Stanford CS231n material ("ConvNets"), pooling operations (Scherer et al. 2010), AlexNet (Krizhevsky et al. 2012), rectified linear units (Nair and Hinton 2010), momentum (Sutskever et al. 2013), DLB (ch. 7.11 on ensemble methods), VGGNet (Simonyan and Zisserman 2014), GoogLeNet (Szegedy et al. 2014)
• Week 8: ResNet (He et al. 2015), analysis of deep neural networks (Canziani et al. 2017), global average pooling and Network in Network (Lin et al. 2014)
• Week 9: DLB (Ch. 10.1-10.2 on recurrent neural networks, 10.2.2 on backpropagation through time), truncated BPTT (Williams and Peng 1990), Andrej Karpathy's blog (The Unreasonable Effectiveness of Recurrent Neural Networks), neural attention for image captioning (Xu et al. 2015), challenges with learning long-term dependencies (Bengio et al. 1994)
• Week 10: DLB (Ch. 10.7 on the challenge of long-term dependencies, 10.10 on LSTMs, 10.11 on optimization)
• Week 11: RL (Ch. 1 on RL history and background, 3 on Markov decision processes), deep RL with unsupervised auxiliary tasks (Jaderberg et al. 2016), multi-agent deep RL (Jaderberg et al. 2019), learning dexterous in-hand manipulation (OpenAI et al. 2018), deep Q-learning (Mnih et al. 2015)
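As a concrete companion to the Week 2 readings on gradient descent, maximum likelihood, and logistic regression, here is a minimal NumPy sketch of fitting logistic regression by batch gradient descent. It is illustrative only; the function names and hyperparameter defaults are my own, not taken from the course materials.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gd(X, y, lr=0.1, steps=2000):
    """Fit logistic regression by batch gradient descent on the
    mean cross-entropy loss. X: (n, d); y: (n,) labels in {0, 1}."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # bias column
    w = np.zeros(Xb.shape[1])
    n = Xb.shape[0]
    for _ in range(steps):
        p = sigmoid(Xb @ w)        # predicted probabilities
        grad = Xb.T @ (p - y) / n  # gradient of the mean cross-entropy
        w -= lr * grad             # gradient descent step
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return (sigmoid(Xb @ w) >= 0.5).astype(int)
```

The gradient expression `Xb.T @ (p - y) / n` is exactly the derivative of the average cross-entropy loss that the maximum-likelihood derivation in the Week 2 readings produces; swapping the full batch for a sampled mini-batch turns this into SGD as used in Tutorial 4.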

### Resources

• Online resources (tutorials, videos, etc.) of interest:
• $\LaTeX$: Resources listed below.