Title: Stochastic Nested Composition Optimization and Beyond
Abstract:
Classical stochastic optimization models usually involve a single expected-value objective function. However, they do not apply to the minimization of a composition of two or more expected-value functions, i.e., the stochastic nested composition optimization problem.
Stochastic composition optimization finds wide application in estimation, risk-averse optimization, dimension reduction, and reinforcement learning. We propose a class of stochastic compositional first-order methods. We prove that the algorithms converge almost surely to an optimal solution for convex optimization problems (or to a stationary point for nonconvex problems), as long as such a solution exists.
The convergence analysis involves the interplay of two martingales operating on different timescales. We obtain rate-of-convergence results under various assumptions and show that the algorithms achieve the optimal sample-error complexity in several important special cases. These results provide the best-known rate benchmarks for stochastic composition optimization. We demonstrate applications to statistical estimation and reinforcement learning. We also introduce some recent developments on nonconvex statistical optimization.
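To illustrate the two-timescale idea informally, the sketch below applies a stochastic compositional gradient update to a hypothetical toy instance of min_x f(E[g(x, xi)]). The problem data (g, f, noise level, and the step-size exponents) are illustrative assumptions, not details from the talk; the sketch only shows the general pattern of a fast-timescale running estimate of the inner expectation paired with a slow-timescale update of the decision variable.

```python
import numpy as np

# Toy instance (assumed for illustration):
#   g(x, xi) = x + xi with xi zero-mean noise, so E[g(x, xi)] = x
#   f(y)     = ||y||^2, hence the overall objective is ||x||^2, minimized at x = 0
rng = np.random.default_rng(0)
dim = 5

def g_sample(x):
    # Noisy sample of the inner function g(x, xi)
    return x + 0.1 * rng.standard_normal(dim)

def grad_g_sample(x):
    # Sampled Jacobian of g with respect to x (identity for this toy g)
    return np.eye(dim)

def grad_f_sample(y):
    # Gradient of the outer function f(y) = ||y||^2
    return 2.0 * y

x = np.ones(dim)   # decision variable, updated on the slow timescale
y = g_sample(x)    # running estimate of the inner expectation E[g(x, xi)]

for k in range(1, 5001):
    alpha = 1.0 / k**0.75  # slow step size for x
    beta = 1.0 / k**0.5    # fast step size for y (beta dominates alpha)
    # Fast timescale: track E[g(x, xi)] with a weighted running average
    y = (1 - beta) * y + beta * g_sample(x)
    # Slow timescale: chain-rule update using the tracked inner estimate
    x = x - alpha * grad_g_sample(x).T @ grad_f_sample(y)

print(np.linalg.norm(x))  # should approach 0, the minimizer of the toy objective
```

The separation of step sizes (beta shrinking more slowly than alpha) is what lets the auxiliary estimate y settle faster than x moves, mirroring the two-martingale interplay described in the abstract.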
Bio:
Mengdi Wang’s research focuses on stochastic, data-driven optimization in machine learning, data analysis, and intelligent systems. She received her PhD from the Massachusetts Institute of Technology in 2013 and became an assistant professor at Princeton in 2014. In 2016 she received the Young Researcher Prize in Continuous Optimization from the Mathematical Optimization Society (awarded once every three years).