Machine Learning Center Seminar Series: Matus Telgarsky – Stochastic Linear Optimization Never Overfits

Event Details
Contact

Kyla Hanson
Machine Learning Center
Operations & Program Manager

Summaries

Summary Sentence: In this talk, Telgarsky will present an elementary and widely applicable truncation-based proof technique which can handle mirror descent on stochastic, batch, and Markovian data, as well as a diverse set of losses including logistic and squared losses.

Full Summary: The first half of this talk will survey a decade of implicit regularization bounds I have developed either alone or with Ziwei Ji, including coordinate and gradient descent rates for linear models, asymptotic gradient descent guarantees for deep linear and ReLU networks, and finally rates for unregularized actor-critic. All of these results require somewhat specialized settings and proof techniques. By contrast, the second half will present an elementary and widely applicable truncation-based proof technique which can handle mirror descent on stochastic, batch, and Markovian data, as well as a diverse set of losses including logistic and squared losses. The bounds establish non-asymptotic implicit regularization along the population mirror descent path, implying low test error without any explicit projection, regularization, or strong convexity. (Paper will reach arXiv before the talk.)
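
For readers unfamiliar with the setup, the following minimal sketch is purely illustrative and is not the speaker's method or paper: it runs unregularized stochastic gradient descent (mirror descent with the Euclidean mirror map) on the logistic loss over separable linear data and checks that the iterate reaches low test error with no projection, explicit regularization, or strong convexity. All data sizes, dimensions, and step sizes below are assumptions made for the example.

    import numpy as np

    # Illustrative sketch only (not the speaker's method): unregularized
    # stochastic gradient descent, i.e. mirror descent with the Euclidean
    # mirror map, on the logistic loss for a linear model. All data sizes,
    # step sizes, and dimensions are assumptions for this example.

    rng = np.random.default_rng(0)
    d, n_train, n_test = 20, 500, 2000

    # Linearly separable synthetic data labeled by a fixed ground-truth direction.
    w_star = rng.normal(size=d)
    w_star /= np.linalg.norm(w_star)

    def make_data(n):
        X = rng.normal(size=(n, d))
        return X, np.sign(X @ w_star)

    X_train, y_train = make_data(n_train)
    X_test, y_test = make_data(n_test)

    def logistic_grad(w, x, y):
        # Gradient of log(1 + exp(-y * <w, x>)); the factor 1 / (1 + exp(m))
        # is written via tanh for numerical stability at large margins.
        m = y * (x @ w)
        return -y * x * 0.5 * (1.0 - np.tanh(0.5 * m))

    w = np.zeros(d)
    step = 0.5
    for t in range(10000):
        i = rng.integers(n_train)                 # one stochastic sample per step
        w -= step * logistic_grad(w, X_train[i], y_train[i])

    # No projection, regularization, or strong convexity was used, yet the
    # unregularized iterate classifies held-out data well.
    test_err = np.mean(np.sign(X_test @ w) != y_test)
    print(f"test error: {test_err:.3f}")

Running the script should print a small test error, illustrating in miniature the kind of implicit-regularization phenomenon the talk treats in far greater generality.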

Speaker:

Matus Telgarsky
Assistant Professor, University of Illinois Urbana-Champaign

Location:

Virtual

Speaker Info: 

Matus Telgarsky is an assistant professor at the University of Illinois, Urbana-Champaign, specializing in deep learning theory. He was fortunate to receive a PhD at UCSD under Sanjoy Dasgupta. Other highlights include: co-founding, in 2017, the Midwest ML Symposium (MMLS) with Po-Ling Loh; receiving a 2018 NSF CAREER award; organizing a Simons Institute summer 2019 program on deep learning with Samy Bengio, Aleksander Mądry, and Elchanan Mossel. Since 2019 he has been a bit anti-social.

Additional Information

In Campus Calendar
No
Groups

ML@GT

Invited Audience
Faculty/Staff, Public, Undergraduate students
Categories
No categories were selected.
Keywords
No keywords were submitted.
Status
  • Created By: Joshua Preston
  • Workflow Status: Published
  • Created On: Feb 10, 2022 - 3:38pm
  • Last Updated: Feb 10, 2022 - 3:45pm