Algorithms & Randomness Center (ARC)
Zhao Song (Princeton & Institute for Advanced Study)
Monday, November 14, 2020
Virtual via Bluejeans - 11:00 am
Title: Faster Optimization: From Linear Programming to Deep Learning
Abstract: Many important real-life problems, in both convex and non-convex settings, can be solved using path-following optimization methods. The running time of optimization algorithms is typically governed by two components: the number of iterations and the cost per iteration. For decades, the vast majority of research effort was dedicated to reducing the number of iterations required for convergence. A recent line of our work shows that the cost per iteration can be dramatically improved by carefully combining dynamic data structures with "robust" variants of the optimization method. A central ingredient is the use of randomized linear algebra for dimensionality reduction (e.g., linear sketching) to quickly maintain dynamic matrix problems. This framework has recently led to breakthroughs on decades-old optimization problems.
In this talk, I will present the framework underlying these breakthroughs, focusing on faster algorithms for linear programming and deep learning. We first show how to use the above ideas to speed up general LP solvers, obtaining an O(n^{omega} + n^{2+1/18})-time algorithm. We then show how to apply similar ideas in the *non-convex* setting of deep learning. We provide both a theoretical result, a near-linear-time training algorithm for (overparameterized) neural networks, and an experimental application of LP techniques to speed up neural network training in practice.
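To give a flavor of the "linear sketching" ingredient mentioned in the abstract, the toy example below (not the speaker's algorithm; sizes and the Gaussian sketch are illustrative assumptions) compresses a tall least-squares problem with a random sketch matrix and solves the much smaller sketched problem, recovering nearly the same solution:

```python
# Toy sketch of randomized dimensionality reduction for least squares.
# A Gaussian sketch S (m = O(d) rows, m << n) approximately preserves
# the column space of A, so solving the sketched problem min ||SAx - Sb||
# gives a solution close to the full min ||Ax - b||.
import numpy as np

rng = np.random.default_rng(0)

n, d = 10_000, 20                       # tall problem: n >> d
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

m = 200                                 # sketch dimension, m << n
S = rng.standard_normal((m, n)) / np.sqrt(m)

# Solve the small sketched problem instead of the full one.
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.linalg.norm(x_sketch - x_full))   # small: sketch ~ full solution
```

In the optimization framework described above, sketches like this are maintained dynamically across iterations so that each iteration touches only the compressed problem rather than the full matrix.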
----------------------------------
Videos of recent talks are available at: http://arc.gatech.edu/node/121
To subscribe to the seminar email list: arc-colloq@Klauscc.gatech.edu