Machine Learning Seminar

CS 570
Computer Science Department, Portland State University

Winter 2017

Topic for Winter Term 2017: Adversarial Learning

Time: Thursdays, 4:00-5:30pm

Location: FAB 88-03

Instructor: Melanie Mitchell, FAB 115-13, (503) 720-2412, e-mail
Office hours: Tuesdays and Thursdays, Noon-1pm, or by appointment

Course Mailing list:

Course description: This course is a one-credit graduate seminar for students who have already taken a course in Machine Learning. Students will read and discuss recent papers from the Machine Learning literature. Each student will be responsible for presenting at least one paper during the term. This one-credit course will be offered each term, and students may take it multiple times. CS MS students who take this course for three terms may count it as one of the courses for the "Artificial Intelligence and Machine Learning" master's track requirement.

Prerequisites: CS 445/545 or permission of the instructor.

Textbook: None. We will read recent papers from the literature.

Course Work and Homework: One or more papers will be assigned per week for everyone in the class to read, along with a list of questions about the paper(s) that each student needs to answer before the following class. Each week one or more students will be assigned as discussion leaders for the week's papers.

Schedule for Winter Term 2017: This will be progressively filled in during the term. Each entry lists the date, topic, discussion leader(s), and assigned readings.

Jan. 12

Snow day!

Jan. 19

Introduction; Adversarial examples

Melanie and Anthony

Szegedy et al., Intriguing properties of neural networks

Nguyen et al., Deep neural networks are easily fooled

No written questions for first class.

Jan. 26

Attacks with adversarial examples

Erik and Thomas

Papernot et al., Practical black-box attacks against deep learning systems using adversarial examples

Sharif et al., Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition

Question Set 1

Feb. 2

Hypotheses about Adversarial Examples


Guest speaker: Josh Staker

Goodfellow et al., Explaining and harnessing adversarial examples

Question Set 2

Feb. 9

Defense against Adversarial Examples

Robert, Wesley

Kurakin et al., Adversarial machine learning at scale

Luo et al., Foveation-based mechanisms alleviate adversarial examples

Question Set 3

Feb. 16

Adversarial Examples in Non-Image Modalities

Noah, Henry

Carlini et al., Hidden voice commands

Papernot et al., Crafting adversarial input sequences for recurrent neural networks

Question Set 4

Feb. 23

Generative Adversarial Networks

Dale, Devin

Goodfellow et al., Generative adversarial networks

Im et al., Generating images with recurrent adversarial networks

No questions this week.

March 2

GANs, continued

Mike, Shiran

Nguyen et al., Synthesizing the preferred inputs for neurons in neural networks via deep generator networks

Isola et al., Image-to-image translation with conditional adversarial networks

Question Set 5

March 9

Applications of Adversarial Learning

Sharad, Noah

Edwards and Storkey, Censoring representations with an adversary

Li et al., Adversarial learning for neural dialogue generation

No questions this week.

March 16

"Guest" lecture: On Understanding, in Humans and Machines

Melanie

March 23

Applications of Adversarial Learning

Sharad, Mike

Shrivastava et al., Learning from simulated and unsupervised images through adversarial training

Abadi and Anderson, Learning to protect communications with adversarial neural cryptography