PhD Defense by Jun-Kun Wang


Event Details
  • Date/Time:
    • Tuesday March 23, 2021
      1:00 pm - 2:30 pm
  • Location: Atlanta, GA; REMOTE
  • URL: Bluejeans
Summaries

Summary Sentence: Understanding Modern Techniques in Optimization: Frank-Wolfe, Nesterov's Momentum, and Polyak's Momentum


Title: Understanding Modern Techniques in Optimization: Frank-Wolfe, Nesterov's Momentum, and Polyak's Momentum

 

Jun-Kun Wang

Ph.D. Candidate

School of Computer Science, Georgia Institute of Technology

 

Date: March 23rd, 2021 (Tuesday)

Time: 1:00 PM - 2:30 PM (EDT)

Location: *No Physical Location*

BlueJeans: https://bluejeans.com/601294912

 

 

Committee:

 

Dr. Jacob Abernethy (advisor) - School of Computer Science, Georgia Institute of Technology

Dr. Guanghui Lan - Industrial & Systems Engineering, Georgia Institute of Technology

Dr. Vidya Muthukumar - Industrial & Systems Engineering and School of Electrical and Computer Engineering, Georgia Institute of Technology

Dr. Richard Peng - School of Computer Science, Georgia Institute of Technology

Dr. Santosh Vempala - School of Computer Science, Georgia Institute of Technology

 

Abstract:

 

Optimization is essential in machine learning, statistics, and data science. Among first-order optimization algorithms, the popular ones include the Frank-Wolfe method, Nesterov's accelerated methods, and Polyak's momentum. While theoretical analyses of the Frank-Wolfe method and Nesterov's methods are available in the literature, these analyses can be quite complicated or less than intuitive. Polyak's momentum, on the other hand, is widely used in training neural networks and is currently the default choice of momentum in PyTorch and TensorFlow. It is widely observed that Polyak's momentum helps train a neural network faster than training without momentum. However, very few examples exhibit a provable acceleration via Polyak's momentum over vanilla gradient descent. There is thus an apparent gap between the theory and the practice of Polyak's momentum.
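
For context, the following is a minimal sketch of how Polyak-style (heavy-ball) momentum is typically invoked when training a network in PyTorch; the toy model, learning rate, and momentum value are illustrative assumptions, not specifics from the dissertation:

    import torch

    # torch.optim.SGD with the `momentum` argument (and nesterov=False,
    # the default) applies Polyak-style heavy-ball momentum.
    model = torch.nn.Linear(10, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    x, y = torch.randn(32, 10), torch.randn(32, 1)
    for _ in range(100):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()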

 

In the first part of this dissertation research, we develop a modular framework that serves as a recipe for constructing and analyzing iterative algorithms for convex optimization. Specifically, our work casts optimization as iteratively playing a two-player zero-sum game. Many existing optimization algorithms, including Frank-Wolfe and Nesterov's acceleration methods, can be recovered from the game by pitting two online learners with appropriate strategies against each other. Furthermore, the sum of the weighted average regrets of the players in the game yields the convergence rate. As a result, our approach provides simple alternative proofs of these algorithms. Moreover, we demonstrate that our approach of "optimization as iteratively playing a game" leads to three new fast Frank-Wolfe-like algorithms for certain constraint sets, which further shows that our framework is generic, modular, and easy to use.
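
As a point of reference, the classical Frank-Wolfe method that the framework recovers can be sketched as follows; the probability-simplex constraint, the quadratic objective, and the standard 2/(t+2) step size are illustrative assumptions, not the new variants developed in the dissertation:

    import numpy as np

    def frank_wolfe_simplex(grad, x0, num_steps=100):
        """Classical Frank-Wolfe over the probability simplex.

        grad: function returning the gradient of the objective at x.
        The linear minimization oracle over the simplex picks the vertex
        (coordinate) with the smallest gradient entry.
        """
        x = x0.copy()
        for t in range(num_steps):
            g = grad(x)
            s = np.zeros_like(x)
            s[np.argmin(g)] = 1.0          # linear minimization oracle
            gamma = 2.0 / (t + 2.0)        # standard step size
            x = (1 - gamma) * x + gamma * s
        return x

    # Example: minimize ||x - c||^2 over the simplex (illustrative only).
    c = np.array([0.1, 0.7, 0.2])
    x_opt = frank_wolfe_simplex(lambda x: 2 * (x - c), np.ones(3) / 3)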

 

In the second part, we develop a modular analysis of provable acceleration via Polyak's momentum for certain problems, which include solving classical strongly convex quadratic problems, training a wide ReLU network in the neural tangent kernel regime, and training a deep linear network with an orthogonal initialization. We develop a meta theorem and show that when Polyak's momentum is applied to these problems, the induced dynamics exhibit a form to which our meta theorem directly applies.
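
To make the quadratic setting concrete, the heavy-ball iteration in that case can be sketched as follows; the particular matrix, step size, and momentum parameter are illustrative assumptions (the standard tuned values for a strongly convex quadratic), not part of the dissertation's statement:

    import numpy as np

    # Polyak's heavy-ball update on f(x) = 0.5 * x^T A x - b^T x:
    #   x_{k+1} = x_k - alpha * grad f(x_k) + beta * (x_k - x_{k-1})
    A = np.array([[3.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
    b = np.array([1.0, -1.0])

    eigs = np.linalg.eigvalsh(A)
    mu, L = eigs.min(), eigs.max()
    alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
    beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

    x_prev = x = np.zeros(2)
    for _ in range(200):
        grad = A @ x - b
        x, x_prev = x - alpha * grad + beta * (x - x_prev), x

    print(x, np.linalg.solve(A, b))   # iterate vs. exact minimizer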

 

In the last part of the dissertation, we show another advantage of Polyak's momentum: it facilitates fast escape from saddle points in smooth non-convex optimization. This result, together with those of the second part, sheds new light on Polyak's momentum in modern non-convex optimization and deep learning.

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD Defense
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Mar 10, 2021 - 9:41am
  • Last Updated: Mar 10, 2021 - 9:41am