Regularized, large-scale and distributed optimization
Optimization for Machine Learning
M2 IASD/MASH, Université Paris Dauphine-PSL, 2023-2024
Program
In these lectures, we introduce several concepts that can be combined with the methods seen in the rest of the course. The first two sessions describe regularization techniques, with a focus on sparsity-inducing regularization. The last two sessions focus on optimization methods tailored to problems with a very large number of variables, and present algorithms suited to distributed data environments.
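As a taste of the regularization material covered in the first two sessions, here is a minimal sketch of the proximal gradient method (ISTA) applied to the LASSO problem min 0.5 ||Ax - b||^2 + lam ||x||_1. The function names and step-size choice are illustrative, not taken from the course notebooks; the l1 proximal operator reduces to componentwise soft-thresholding.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=1000):
    """Proximal gradient (ISTA) for min 0.5*||A x - b||^2 + lam*||x||_1.

    Illustrative sketch: constant step size 1/L, where L is the
    Lipschitz constant of the gradient of the smooth part.
    """
    L = np.linalg.norm(A, 2) ** 2      # L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)       # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Example: recover a sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 3.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
```

The soft-thresholding step is what produces exactly sparse iterates, which plain gradient descent on a smoothed penalty would not.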
Schedule
Lecture 1/4 (Nov. 9) Proximal methods.
Lecture 2/4 (Nov. 16) Sparse optimization and LASSO.
Lecture 3/4 (Nov. 20) Coordinate descent methods.
Lecture 4/4 (Dec. 4) Distributed optimization.
Course material
Virtual board for lecture 1 (proximal methods)
PDF
Virtual board for lecture 2 (sparsity)
PDF
Python notebook on regularization
[Sources]
Virtual board for lecture 3 (coordinate descent)
PDF
Virtual board for lecture 4 (distributed optimization)
PDF
Python notebook on coordinate descent and ADMM
[Sources]
Lecture notes
PDF
Materials on this page are available under the Creative Commons CC BY-NC 4.0 license.
The French version of this page can be found here.