### Mathematics Colloquium

# Russell Luke

### (Göttingen)

## "Nonconvex optimization methods versus convex relaxations: the cost of certainty"

|            |                          |
| ---------- | ------------------------ |
| **Date:**  | Mon, November 4, 2013    |
| **Time:**  | 17:15                    |
| **Place:** | Research II Lecture Hall |

**Abstract:** Numerical algorithms for nonconvex optimization models are often eschewed
because the usual optimality criteria around which numerical algorithms are
designed do not distinguish solutions from critical points. This issue comes into
sharp relief with what has become known as the sparsity optimization problem.
The current trend for solving this problem, sparked by a famous paper
of Candès and Tao [2005], is to use convex relaxations. Convex relaxations have
the advantage that every point satisfying the necessary optimality criteria is
also a solution to the relaxed optimization problem. This certainty comes at
the cost of imposing difficult-to-verify restrictions on the affine constraints
in order to guarantee the correspondence of solutions to the relaxed problem to
solutions to the original problem. Moreover, convex relaxations can lead to a
tremendous increase in the dimensionality of the problem.
We present a different, nonconvex approach. Its advantages are that the available algorithms are simple to apply and (locally) linearly convergent, and that the problem formulation stays close in spirit, if not in fact, to the original problem, thus avoiding the curse of dimensionality. We also provide conditions under which fundamental algorithms applied to the nonconvex model are globally convergent.
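To make the nonconvex viewpoint concrete, here is a minimal sketch (not the speaker's actual method) of iterative hard thresholding, a standard nonconvex algorithm for sparsity optimization: a gradient step on the least-squares objective followed by projection onto the nonconvex set of k-sparse vectors. The problem sizes and step-size rule below are illustrative assumptions.

```python
import numpy as np

def iht(A, b, k, iters=500):
    """Iterative hard thresholding for  min ||Ax - b||^2  s.t.  ||x||_0 <= k.

    Nonconvex but simple: a gradient step followed by projection onto the
    set of k-sparse vectors (keep the k largest-magnitude entries).
    """
    n = A.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    x = np.zeros(n)
    for _ in range(iters):
        g = x + step * (A.T @ (b - A @ x))   # gradient step
        idx = np.argsort(np.abs(g))[-k:]     # indices of k largest entries
        x = np.zeros(n)
        x[idx] = g[idx]                      # hard-thresholding projection
    return x

# Illustrative noiseless recovery of a sparse vector from linear measurements.
rng = np.random.default_rng(0)
m, n, k = 90, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

x_hat = iht(A, b, k)
```

Note that the iteration operates in the original n-dimensional space; no lifting or relaxation enlarges the problem, which is the dimensionality point made in the abstract.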

We hope to convince our audience that convexity is not a categorical imperative
for provably convergent numerical algorithms.