Latest


2021.12.31: Corrections in the lecture notes.
2021.12.15: Added solutions in the lecture notes.
2021.11.02: Corrections in the lecture notes.
2021.10.17: Updated the lecture notes, added exercises.
2021.10.15: Correction in the handwritten notes for the second session.
2021.10.14: Uploaded Python notebook and handwritten notes for the second session.
2021.10.13: Updated the links in the lecture notes.
2021.10.12: Updated lecture notes, with links to the recordings of the first session.
2021.10.12: Uploaded Python notebook and handwritten notes for the first session.
2021.10.11: The course webpage is online.

Instructor

Clément Royer
clement.royer(at)dauphine.psl.eu


Advanced aspects of gradient-type methods

Optimization for Machine Learning

M2 IASD/MASH, Université Paris Dauphine-PSL, 2021-2022


Program

     In these lectures, we provide a modern perspective on gradient descent methods through the lens of convergence rate analysis. We introduce the notion of momentum, as well as its applications in convex optimization. We also discuss the challenges posed by nonconvex optimization, and how recent results partially circumvent these issues.
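
     The lecture notes and Python notebooks below are the reference material. As a quick illustration of the momentum idea mentioned above, here is a minimal Python sketch of gradient descent with heavy-ball momentum on a toy quadratic; the test function, step size, and momentum parameter are illustrative assumptions, not values taken from the course.

     import numpy as np

     def gradient_descent_momentum(grad, x0, step=0.1, beta=0.9, n_iters=100):
         """Iterate x_{k+1} = x_k - step * grad(x_k) + beta * (x_k - x_{k-1})."""
         x_prev = x0.copy()
         x = x0.copy()
         for _ in range(n_iters):
             x_next = x - step * grad(x) + beta * (x - x_prev)
             x_prev, x = x, x_next
         return x

     # Illustrative example: minimize f(x) = 0.5 * x^T A x for an
     # ill-conditioned diagonal matrix A (minimizer is the zero vector).
     A = np.diag([1.0, 10.0])
     grad_f = lambda x: A @ x
     x_final = gradient_descent_momentum(grad_f, x0=np.array([1.0, 1.0]))
     print(x_final)  # should be close to [0, 0]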

Schedule

     Lecture 1/2 (10/12) Advanced gradient descent and acceleration.
     Lecture 2/2 (10/14) Nonconvex optimization and gradient descent.

Course material

     Lecture notes PDF
     Handwritten notes for the first lecture PDF
     Handwritten notes for the second lecture PDF
     Python notebook for the first session [Sources]
     Python notebook for the second session [Sources]

Materials on this page are available under the Creative Commons CC BY-NC 4.0 license.
The French version of this page is available here.