TITLE: First-order Methods for Smooth Convex Optimization with Inexact Oracle: Classical, Fast and Intermediate Gradient Methods
SPEAKER: Olivier Devolder, CORE, Université catholique de Louvain
ABSTRACT:
In this talk, we analyze how first-order methods for smooth convex optimization (the classical and fast gradient methods) behave when only inexact first-order information is available. We introduce a new notion of approximate first-order oracle.
For each method, we develop complexity results and study the link between the desired accuracy on the objective function and the accuracy required from the oracle. We find that in this inexact setting, the superiority of the fast gradient method over the classical one is no longer so clear: unlike the classical gradient method, the optimal scheme suffers from an accumulation of errors, so the choice between the two kinds of methods depends on the cost of computing the inexact first-order information.
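To make the trade-off concrete, here is a minimal numerical sketch (not from the talk; the quadratic test problem, the noise model, and all names and constants are illustrative assumptions) that runs the classical gradient method and Nesterov's fast gradient method with a gradient oracle perturbed by noise of size delta:

    # Illustrative sketch only: classical vs. fast gradient method when every
    # gradient call is corrupted by bounded noise, mimicking an inexact oracle.
    import numpy as np

    rng = np.random.default_rng(0)
    n, delta, iters = 50, 1e-3, 500

    A = rng.standard_normal((n, n))
    Q = A.T @ A / n                      # positive definite Hessian
    b = rng.standard_normal(n)
    L = np.linalg.eigvalsh(Q).max()      # Lipschitz constant of the gradient

    f = lambda x: 0.5 * x @ Q @ x - b @ x

    def inexact_grad(x):
        """Exact gradient plus noise of norm delta: a crude stand-in for an inexact oracle."""
        noise = rng.standard_normal(n)
        return Q @ x - b + delta * noise / np.linalg.norm(noise)

    # Classical gradient method: the oracle error stays bounded, it does not accumulate.
    x = np.zeros(n)
    for _ in range(iters):
        x -= inexact_grad(x) / L

    # Fast gradient method (Nesterov momentum): the oracle error can accumulate.
    y = z = np.zeros(n)
    for k in range(iters):
        x_new = y - inexact_grad(y) / L
        y = x_new + k / (k + 3) * (x_new - z)
        z = x_new

    x_star = np.linalg.solve(Q, b)
    print("classical gap:", f(x) - f(x_star))
    print("fast gap:     ", f(z) - f(x_star))

With exact gradients (delta = 0) the fast method dominates; as delta grows, the momentum step keeps recycling corrupted directions, so its accumulated error term grows with the iteration count while the classical method's error stays bounded. Increasing delta or iters makes the contrast easier to observe.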
We prove that this phenomenon of error accumulation is an intrinsic property of any first-order method with the optimal convergence rate. More precisely, we show that there is a clear link between the rate of convergence of a first-order method and the lowest rate of error accumulation that can be expected.
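In symbols, and as a hedged paraphrase (the notation and generic constants below are mine, not quoted from the talk): if a first-order method run with an oracle of accuracy delta guarantees, for some 1 <= p <= 2,

\[
f(x_k) - f^* \;\le\; \frac{C\,L R^2}{k^{p}} \;+\; E_k\,\delta ,
\]

where R bounds the distance from the starting point to a minimizer, then the error-amplification factor E_k cannot grow more slowly than order k^{p-1}. The classical method (p = 1) sits at one extreme with E_k = O(1), and any optimal-rate method (p = 2) at the other with E_k growing linearly in k.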
Motivated by this result, we develop a whole family of first-order methods whose rates of convergence and rates of error accumulation are intermediate between those of the classical and fast gradient methods. We show how these new intermediate first-order methods can be used to accelerate the minimization of a smooth convex function when only inexact first-order information is available.
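Concretely, and again as an assumption-laden sketch of the statement rather than a quotation, the intermediate methods can be read as achieving, for each p between 1 and 2,

\[
f(x_k) - f^* \;\le\; O\!\left(\frac{L R^2}{k^{p}}\right) \;+\; O\!\left(k^{\,p-1}\,\delta\right),
\]

matching the lower bound above: p = 1 recovers the classical gradient method and p = 2 the fast gradient method, so p becomes a tuning knob trading convergence speed against robustness to oracle error.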
We present applications of our results to the smoothing technique, the augmented Lagrangian method, Moreau-Yosida regularization, and non-smooth convex problems.
This is joint work with François Glineur and Yurii Nesterov.