Schedule

Location: E:1406, E-building @ LTH, Lund University (Ole Römers Väg 3)

If you are interested in taking this as a PhD course for credit (3 ECTS credits), send an email to bob@control.lth.se

Assignments

Send solutions to bob@control.lth.se

Assignment 1: Due Friday 24/8. Use CVX to solve Problem 5.4 in the Additional Exercises.
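
For Assignment 1, the general CVX pattern looks like the following. This is a minimal sketch with synthetic data, not Problem 5.4 itself, whose data and objective are given in the Additional Exercises.

  % Minimal CVX usage sketch (synthetic data, not Problem 5.4 itself):
  m = 20; n = 10;
  A = randn(m, n); b = randn(m, 1);

  cvx_begin
      variable x(n)
      minimize( norm(A*x - b) )
      subject to
          x >= 0
  cvx_end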

Assignment 2: Due Friday 24/8. Use CVXGEN to generate code for an optimization problem of your choice. If you don't have a specific problem in mind, do it for a model predictive control problem.
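
For Assignment 2, one way to start on the MPC option is to prototype the problem in CVX first and then translate it, almost line by line, into a CVXGEN problem description. The sketch below uses placeholder sizes, dynamics, and bounds, chosen only to illustrate the structure.

  % Rough CVX prototype of a small MPC problem; all sizes and data here
  % are placeholders, chosen only to illustrate the structure.
  n = 5; m = 2; T = 10;                  % states, inputs, horizon
  A = randn(n); B = randn(n, m);         % placeholder dynamics
  Q = eye(n); R = eye(m); Qf = eye(n);   % stage and final costs
  x0 = randn(n, 1); u_max = 1;           % initial state, input bound

  cvx_begin
      variables x(n, T+1) u(m, T)
      J = 0;
      for t = 1:T
          J = J + quad_form(x(:,t), Q) + quad_form(u(:,t), R);
      end
      minimize( J + quad_form(x(:,T+1), Qf) )
      subject to
          x(:,1) == x0;
          for t = 1:T
              x(:,t+1) == A*x(:,t) + B*u(:,t);
          end
          abs(u) <= u_max;
  cvx_end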

Assignment 3: Due Friday 31/8 @ 12am. ADMM algorithm for LP feasibility.
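
For Assignment 3, here is a minimal sketch of one natural setup (my own illustrative splitting, with synthetic data): put the equality constraints in one ADMM term and the nonnegativity constraints in the other, so that both updates become projections.

  % ADMM sketch for LP feasibility: find x with A*x == b, x >= 0.
  % Splitting: f = indicator of {x : A*x == b}, g = indicator of {z : z >= 0},
  % consensus constraint x == z, scaled dual variable u.
  m = 30; n = 100;
  A = randn(m, n);
  b = A * rand(n, 1);          % feasible by construction

  z = zeros(n, 1); u = zeros(n, 1);
  P = A' / (A * A');           % cached factor for the affine projection
  for k = 1:1000
      v = z - u;
      x = v - P*(A*v - b);     % x-update: project v onto {x : A*x == b}
      z = max(x + u, 0);       % z-update: project onto nonnegative orthant
      u = u + x - z;           % scaled dual update
      if norm(x - z) < 1e-8, break; end
  end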

Assignment 4: Due Friday 31/8 @ 12am. Solve Problem 5.12 in the Additional Exercises. Here is the needed data file: ls_perm_meas_data.m

Final meeting of the course: Friday 31/8, 13.15-14.00, seminar room M:2498

So far, 25 people have handed in solutions; check the list here

Contents

Lecture 1 (Overview lecture)

Convex Optimization: From Real-Time Embedded to Large-Scale Distributed

Stephen Boyd

Convex optimization has emerged as a useful tool for applications that include
data analysis and model fitting, resource allocation, engineering design,
network design and optimization, finance, and control and signal processing.
After an overview, the talk will focus on two extremes: real-time embedded
convex optimization, and distributed convex optimization. Code generation can
be used to produce extremely efficient and reliable solvers for small
problems that can execute in milliseconds or microseconds, and are ideal for
embedding in real-time systems. At the other extreme, we describe methods for
large-scale distributed optimization, which coordinate many solvers to solve
enormous problems.

******
Lecture 2

CVXGEN: A Code Generator for Embedded Convex Optimization

J. Mattingley and S. Boyd

In this talk, I describe CVXGEN, an automatic code generator for convex
optimization problems.  CVXGEN starts from a high-level description of a
problem, written in disciplined convex programming (DCP) style, and
automatically generates library-free, flat C code ready for compilation into a
high-speed solver. CVXGEN is just as simple to use as conventional
parser-solvers like CVX or YALMIP, but creates solvers that are thousands of
times faster (often executing in milliseconds or microseconds) and suitable for
embedding.  Such solvers have potential application in many fields, including
signal processing, automatic control, machine learning and finance.  The
primary focus of the talk will be on the techniques used to make the CVXGEN
generated solvers extremely reliable and fast.

******
Lecture 3

Alternating Direction Method of Multipliers for Distributed Optimization

joint work with Neal Parikh, Eric Chu, Borja Peleato, and Jon Eckstein

Problems in areas such as machine learning and dynamic optimization on a large
network lead to extremely large convex optimization problems, with problem data
stored in a decentralized way, and processing elements distributed across a
network.  We argue that the alternating direction method of multipliers is well
suited to such problems.  The method was developed in the 1970s, with roots in
the 1950s, and is equivalent or closely related to many other algorithms, such
as dual decomposition, the method of multipliers, Douglas-Rachford splitting,
Spingarn's method of partial inverses, Dykstra's alternating projections,
Bregman iterative algorithms for $\ell_1$ problems, proximal methods, and others.
After briefly surveying the theory and history of the algorithm, we discuss
applications to statistical and machine learning problems such as the lasso and
support vector machines, and to dynamic energy management problems arising in
the smart grid.
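
For reference, written in scaled dual form for a problem split as minimize $f(x) + g(z)$ subject to $Ax + Bz = c$, the iteration is:

\begin{align*}
x^{k+1} &:= \operatorname*{argmin}_x \left( f(x) + \tfrac{\rho}{2} \|Ax + Bz^k - c + u^k\|_2^2 \right) \\
z^{k+1} &:= \operatorname*{argmin}_z \left( g(z) + \tfrac{\rho}{2} \|Ax^{k+1} + Bz - c + u^k\|_2^2 \right) \\
u^{k+1} &:= u^k + Ax^{k+1} + Bz^{k+1} - c
\end{align*}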

******
Lecture 4

$\ell_1$-norm Methods for Convex-Cardinality Problems

Stephen Boyd

We give an overview of problems involving cardinality (number of nonzero
elements), and some very effective approaches to solving them, using a
heuristic based on the $\ell_1$ norm.  
This is the basis for widely used methods in
statistics and machine learning, including LASSO, basis pursuit, compressed
sensing, and others.  We interpret the $\ell_1$ method as a relaxation of the
original cardinality problem, and explain ideas like iterated weight updates
and asymmetric relaxations.
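
For reference, the basic relaxation replaces the cardinality objective by the $\ell_1$ norm,

\[
\text{minimize } \mathbf{card}(x)
\quad\longrightarrow\quad
\text{minimize } \|x\|_1 = \textstyle\sum_i |x_i|,
\]

over the original constraint set, and one standard form of the iterated weight update is

\[
x^{k+1} := \operatorname*{argmin}_{x \in \mathcal{C}} \sum_i w_i^k |x_i|,
\qquad
w_i^{k+1} := \frac{1}{\varepsilon + |x_i^{k+1}|},
\]

where $\mathcal{C}$ is the constraint set and $\varepsilon > 0$ keeps the weights bounded.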