Slides 2019

Outline

The 2019 course consists of the following topics:
lecture 1

Introduction to Continuous Optimization.

lecture 2

Review of basic probability theory.

Maximum likelihood, M-estimators, and empirical risk minimization as a motivation for convex optimization.
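
The bridge from statistics to optimization in lecture 2 can be made concrete: under a Gaussian noise model, maximum likelihood for a linear model reduces to minimizing a least-squares empirical risk. A minimal sketch, with illustrative data (not taken from the course):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear model: y = X w + Gaussian noise
n, d = 200, 3
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Under Gaussian noise, the maximum-likelihood estimator minimizes the
# empirical risk (1/n) * sum_i (y_i - x_i^T w)^2, i.e., least squares.
w_mle = np.linalg.lstsq(X, y, rcond=None)[0]
print(w_mle)  # close to w_true
```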

lecture 3

Unconstrained smooth minimization I: Concept of an iterative optimization algorithm. Gradient descent.

Convergence rate characterization of functions.
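
A minimal sketch of the basic gradient descent iteration, x_{k+1} = x_k - (1/L) * grad f(x_k), applied to an illustrative least-squares objective (the problem data and the 1/L step-size rule are assumptions, not taken from the slides):

```python
import numpy as np

def gradient_descent(grad_f, x0, L, iters=200):
    """Gradient descent with the classical 1/L step size for an L-smooth f."""
    x = x0.copy()
    for _ in range(iters):
        x = x - (1.0 / L) * grad_f(x)
    return x

# Illustrative objective f(x) = 0.5 * ||A x - b||^2;
# its gradient is A^T (A x - b), and L = ||A^T A||_2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
L = np.linalg.norm(A.T @ A, 2)
x_hat = gradient_descent(lambda x: A.T @ (A @ x - b), np.zeros(2), L)
print(x_hat, np.linalg.solve(A, b))  # should agree since A is invertible
```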

lecture 4

Unconstrained smooth minimization II: Accelerated gradient methods.
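
One common variant of Nesterov's accelerated gradient method, sketched on the same illustrative quadratic; the 1/L step size and the standard t_k momentum recursion are assumptions about which variant is used:

```python
import numpy as np

def accelerated_gd(grad_f, x0, L, iters=200):
    """Nesterov's accelerated gradient method (one common variant)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - (1.0 / L) * grad_f(y)        # gradient step at the extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum / extrapolation
        x, t = x_next, t_next
    return x

# Same illustrative quadratic as above: f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
L = np.linalg.norm(A.T @ A, 2)
print(accelerated_gd(lambda x: A.T @ (A @ x - b), np.zeros(2), L))
```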

lecture 5

Unconstrained smooth minimization III: Adaptive gradient methods.

Newton’s method.

Accelerated adaptive gradient methods.
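
Of the adaptive methods, diagonal AdaGrad is the simplest to sketch: each coordinate is scaled by its accumulated squared gradients. The step size eta and the test problem below are illustrative assumptions:

```python
import numpy as np

def adagrad(grad_f, x0, eta=0.5, iters=500, eps=1e-8):
    """Diagonal AdaGrad: per-coordinate steps from accumulated squared gradients."""
    x = x0.copy()
    s = np.zeros_like(x)                  # running sum of squared gradients
    for _ in range(iters):
        g = grad_f(x)
        s += g * g
        x = x - eta * g / (np.sqrt(s) + eps)
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
print(adagrad(lambda x: A.T @ (A @ x - b), np.zeros(2)))
```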

lecture 6

Stochastic gradient methods: Stochastic programming.

Stochastic gradient descent.

Variance reduction.
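
A minimal stochastic gradient descent sketch for a finite-sum least-squares problem, sampling one component per iteration with a decaying step size (the data and the 1/k schedule are illustrative choices, not the course's):

```python
import numpy as np

def sgd(X, y, iters=5000, seed=0):
    """SGD for f(w) = (1/n) * sum_i 0.5 * (x_i^T w - y_i)^2, one sample per step."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for k in range(iters):
        i = rng.integers(n)                  # sample a component uniformly
        g = (X[i] @ w - y[i]) * X[i]         # unbiased stochastic gradient
        w -= g / (k + 1)                     # classical decaying step size
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
print(sgd(X, y))  # roughly recovers the underlying weights
```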

lecture 7

Optimization for Deep Learning: From convex to nonconvex optimization.

Neural networks.

Saddle point problems.

Generative Adversarial Networks.
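
Saddle point problems behave qualitatively differently from minimization. As an illustrative (not course-provided) example, simultaneous gradient descent-ascent on the bilinear problem min_x max_y x*y spirals away from the saddle point (0, 0), one reason GAN training requires care:

```python
import numpy as np

# Simultaneous gradient descent-ascent on f(x, y) = x * y.
# grad_x f = y, grad_y f = x; the unique saddle point is (0, 0).
eta = 0.1
x, y = 1.0, 1.0
for _ in range(100):
    x, y = x - eta * y, y + eta * x   # descend in x, ascend in y
print(np.hypot(x, y))  # the distance to the saddle point grows every step
```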

lecture 8

Composite minimization I: Subgradient method.

Proximal and accelerated proximal gradient methods.
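
A minimal proximal gradient (ISTA) sketch for the composite problem 0.5*||A x - b||^2 + lam*||x||_1, whose proximal operator is soft-thresholding; the data and regularization weight are illustrative:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, iters=500):
    """Proximal gradient for f(x) = 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A.T @ A, 2)      # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)  # gradient step, then prox
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.0, 1.5]  # sparse ground truth
b = A @ x_true + 0.05 * rng.standard_normal(50)
print(ista(A, b, lam=0.5))
```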

lecture 9

Composite minimization II: Proximal gradient method for nonconvex problems.

Proximal Newton-type methods.

Stochastic proximal gradient methods.
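
Stochastic proximal gradient combines the previous two ideas: the full gradient in the proximal gradient step is replaced by a sampled one. A hedged sketch on an illustrative l1-regularized least-squares problem (the step schedule and lam are assumptions):

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def stochastic_prox_grad(A, b, lam, iters=20000, seed=0):
    """Stochastic prox-grad for (1/n)*sum_i 0.5*(a_i^T x - b_i)^2 + lam*||x||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(iters):
        i = rng.integers(n)
        g = (A[i] @ x - b[i]) * A[i]           # stochastic gradient of the smooth part
        step = 1.0 / np.sqrt(k + 1)            # illustrative decaying step size
        x = soft_threshold(x - step * g, step * lam)  # prox with matched threshold
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.0, 1.5]
b = A @ x_true + 0.05 * rng.standard_normal(50)
print(stochastic_prox_grad(A, b, lam=0.05))
```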

lecture 10

Constrained convex minimization I: The primal-dual approach.

Smoothing approaches for non-smooth convex minimization.
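
Smoothing replaces a non-smooth term with a smooth surrogate so that fast gradient methods apply. As one illustrative instance, the l1 norm can be replaced coordinate-wise by its Moreau envelope (the Huber function):

```python
import numpy as np

def huber_grad(x, mu):
    """Gradient of the Moreau envelope (Huber smoothing) of ||x||_1.

    Each coordinate of the envelope is x_i^2 / (2*mu) when |x_i| <= mu,
    and |x_i| - mu/2 otherwise; its gradient is clip(x_i / mu, -1, 1).
    """
    return np.clip(x / mu, -1.0, 1.0)

# Illustrative use: gradient descent on 0.5*||x - c||^2 + smoothed ||x||_1,
# whose smooth gradient is Lipschitz with constant 1 + 1/mu.
mu, c = 0.1, np.array([2.0, -0.05, 1.0])
L = 1.0 + 1.0 / mu
x = np.zeros(3)
for _ in range(500):
    x -= ((x - c) + huber_grad(x, mu)) / L
print(x)  # approaches soft-thresholding of c at level 1 as mu -> 0
```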

lecture 11

Constrained convex minimization II: The conditional gradient (Frank-Wolfe) method.

Stochastic CGM.
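
A minimal Frank-Wolfe (conditional gradient) sketch over the l1 ball, where the linear minimization oracle simply returns a signed, scaled vertex; the quadratic objective, radius, and 2/(k+2) step schedule are illustrative:

```python
import numpy as np

def frank_wolfe(grad_f, d, radius, iters=500):
    """Conditional gradient over the l1 ball {x : ||x||_1 <= radius}."""
    x = np.zeros(d)
    for k in range(iters):
        g = grad_f(x)
        i = np.argmax(np.abs(g))            # linear minimization oracle:
        s = np.zeros(d)                     # best vertex of the l1 ball
        s[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (k + 2.0)             # classical step-size schedule
        x = (1.0 - gamma) * x + gamma * s   # move toward the chosen vertex
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
print(frank_wolfe(lambda x: A.T @ (A @ x - b), d=2, radius=1.0))
```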