Adaptive Gain Choice for Large-Scale Stochastic Approximation Procedures
GUEST LECTURER
Prof. Anatoli Iouditski
AFFILIATION
University Joseph Fourier, Grenoble, France
ABSTRACT
The subject of this talk is a complexity analysis of a family of large-scale stochastic approximation algorithms. The methods belong to the family of primal-dual descent algorithms introduced by Yu. Nesterov. We propose an adaptive choice of the gain sequences of the algorithm that makes it possible to attain the optimal rates of convergence on wide classes of problems. We show, for instance, that if it is known a priori that the objective function is Lipschitz, the proposed algorithm attains the minimax rate of convergence. Further, if the objective belongs to a "better class" of smooth functions with Lipschitz-continuous gradient, the proposed algorithm also attains the minimax rate.
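To give a concrete feel for the kind of method discussed, the following is a minimal one-dimensional sketch of stochastic approximation with the classical gain choice gamma_t = c / sqrt(t) and iterate averaging, which achieves the minimax rate for Lipschitz convex objectives. This is an illustrative toy, not the lecturer's adaptive algorithm; the function names and the test objective |x - 1| are assumptions made for the example.

```python
import math
import random

def sa_with_averaging(stoch_subgrad, x0, n_steps, c=1.0):
    """Stochastic approximation with gain gamma_t = c / sqrt(t)
    and running iterate averaging (Polyak-Ruppert style)."""
    x = x0
    avg = 0.0
    for t in range(1, n_steps + 1):
        g = stoch_subgrad(x)          # noisy (sub)gradient oracle
        x -= (c / math.sqrt(t)) * g   # decreasing gain step
        avg += (x - avg) / t          # running average of iterates
    return avg

# Toy Lipschitz objective f(x) = |x - 1|, observed through a noisy
# subgradient oracle (zero-mean Gaussian noise).
random.seed(0)

def noisy_subgrad(x):
    return (1.0 if x > 1.0 else -1.0) + random.gauss(0.0, 0.5)

x_hat = sa_with_averaging(noisy_subgrad, x0=5.0, n_steps=20000)
print(x_hat)  # averaged iterate, close to the minimizer x* = 1
```

The adaptive gain choice of the talk would, roughly speaking, replace the fixed constant c by a data-driven sequence, so that the same procedure also exploits additional smoothness (a Lipschitz-continuous gradient) when it is present.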