Performance analysis of approximation hierarchies for polynomial optimization
We consider the problem of minimizing a multivariate polynomial $f$ over a compact set $K$, in general a hard, non-linear, non-convex optimization problem. Several hierarchies of upper and lower bounds on the minimum value of $f$ are known. The \emph{upper} bounds are obtained by searching for a sum-of-squares polynomial density of degree $2r$ that minimizes the expected value of $f$ over $K$ with respect to a given reference measure on $K$. The \emph{lower} bounds are obtained by searching for the largest scalar $\lambda$ for which the polynomial $f - \lambda$ admits a decomposition in the quadratic module generated by the polynomial constraints defining $K$, truncated at degree $2r$.
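In symbols, writing $K=\{x : g_1(x)\ge 0,\dots,g_m(x)\ge 0\}$ with reference measure $\mu$ (the notation here is a standard choice, not fixed by the description above), the two hierarchies can be stated as

\[
\overline{f}^{(r)} \;=\; \min_{\sigma \in \Sigma_r} \int_K f(x)\,\sigma(x)\,d\mu(x) \quad \text{s.t.}\quad \int_K \sigma(x)\,d\mu(x) = 1,
\]

where $\Sigma_r$ denotes the sums of squares of polynomials of degree at most $2r$, and

\[
\underline{f}_{(r)} \;=\; \max\Big\{\lambda \;:\; f - \lambda = \sigma_0 + \sum_{j=1}^m \sigma_j g_j,\ \ \sigma_0,\sigma_j \in \Sigma,\ \ \deg(\sigma_0),\,\deg(\sigma_j g_j) \le 2r \Big\}.
\]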
A fundamental question is what can be said about the quality of these bounds in terms of the relaxation order $r$. The upper bounds turn out to be easier to analyse: one can show a convergence rate in $O(\log^2 r / r^2)$ when $K$ is semialgebraic, and a sharper rate in $O(1/r^2)$ for a wide subclass of sets (including the ball, the hypercube, the sphere and the simplex). We present the main tools used to achieve these results, such as the reformulation as an eigenvalue problem and links to the roots of orthogonal polynomials. We also sketch a possible approach for analysing the lower bounds, which uses polynomial kernels and exploits the rates for the upper bounds. This approach was recently applied successfully to the unit sphere (by Fang and Fawzi, 2020) and to the Boolean hypercube.
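As a small illustration of the eigenvalue reformulation, consider the univariate toy case $f(x)=x$ on $K=[-1,1]$ with the Lebesgue measure (this concrete instance and the function names below are chosen for illustration, not taken from the abstract). Writing the density as $\sigma = v^\top M v$ with $M \succeq 0$ in the monomial basis $v=(1,x,\dots,x^r)$, the degree-$2r$ upper bound becomes the smallest generalized eigenvalue of the pair $(A,B)$ of moment matrices of $f\,d\mu$ and $d\mu$:

```python
import numpy as np
from scipy.linalg import eigh

def lebesgue_moment(k):
    # k-th moment of the Lebesgue measure on [-1, 1]:
    # integral of x^k over [-1, 1] = 0 for odd k, 2/(k+1) for even k.
    return 0.0 if k % 2 else 2.0 / (k + 1)

def upper_bound(r):
    # Degree-2r sum-of-squares density upper bound for min of f(x) = x on [-1, 1].
    # A[i, j] = integral of x * x^{i+j} dmu (moments of f dmu),
    # B[i, j] = integral of x^{i+j} dmu     (moments of mu);
    # the bound is the smallest lambda with A v = lambda B v.
    n = r + 1
    A = np.array([[lebesgue_moment(i + j + 1) for j in range(n)] for i in range(n)])
    B = np.array([[lebesgue_moment(i + j) for j in range(n)] for i in range(n)])
    return eigh(A, B, eigvals_only=True)[0]  # eigenvalues returned in ascending order

bounds = [upper_bound(r) for r in range(1, 6)]
```

For $r=1$ this gives $-1/\sqrt{3} \approx -0.577$, and the bounds decrease monotonically toward the true minimum $-1$ as $r$ grows.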
Based on joint works with Etienne de Klerk (Tilburg) and Lucas Slot (CWI, Amsterdam).