Machine Learning (COMP-652 and ECSE-608)
Winter 2015

Lecture Schedule

Date Topic Materials
Jan. 6 Introduction. Linear models. Lecture 1 slides
Bishop, Sec. 1.1
If you need to catch up on the math, see the brief probability review and the linear algebra and matrix calculus review from Stanford University
Jan. 8 More on linear models. Overfitting. Regularization. Lecture 2 slides

Jan. 13 More on fitting machine learning models. Maximum likelihood and Bayesian perspectives. Lecture 3 slides
Bishop, Sec. 3.1, 3.3
Jan. 15 Non-linear methods: Kernels. Lecture 4 slides
Jan. 20 Optimization in the dual space. Maximum margin classification. Lecture 5 slides
Jan. 22 Computational learning theory (part 1). Lecture 6 slides
Jan. 27 Computational learning theory. Active learning. Lecture 7 slides
Jan. 29 More on active learning. COLT for regression. Lecture 8 slides
Feb. 3 Learning with structured data. Introduction to graphical models via mixture models. Lecture 9 slides
Feb. 5 Representational power of directed graphical models. Inference methods. Lecture 10 slides
Feb. 10 Learning methods for graphical models. Latent variables. Lecture 11 slides
Feb. 12 Undirected graphical models. Representational power. Lecture 12 slides
Feb. 17 Inference and learning in undirected graphical models. Lecture 13 slides
Feb. 19 Boltzmann machines. Deep belief networks. Lecture 14 slides
Feb. 24 Unsupervised learning: a latent variable perspective. Lecture 15 slides
Feb. 26 Dimensionality reduction: PCA, kernel PCA, autoencoders. Lecture 16 slides
Mar. 10 Topic modelling. Lecture 17 slides
Mar. 12 In-class midterm. Covers lectures until the end of February.
Mar. 17 Time series analysis. Latent variable models for time series. Lecture 18 slides
Mar. 19 Spectral methods for time series. Lecture 19 slides
Mar. 24 Monte Carlo vs. temporal-difference methods for time series. Lecture 20 slides
Mar. 26 Non-parametric methods for time series (and other structured) data. Lecture 21 slides
Mar. 31 Reinforcement learning. Markov Decision Processes and Bellman equations. Lecture 22 slides
Apr. 2 Function approximation methods for reinforcement learning. Lecture 23 slides
Apr. 7 The problem of optimal control. Exploration-exploitation trade-off. Value-based methods. Lecture 24 slides
Apr. 9 Policy search methods. Lecture 25 slides