Machine Learning Center Seminar: Vidya Muthukumar – Surprises in high-dimensional (overparameterized) linear classification


Event Details
  • Date/Time:
    • Wednesday April 6, 2022
      12:15 pm - 1:15 pm
  • Location: https://gatech.zoom.us/j/95585693675
  • Fee(s): N/A
Contact

Lia Namirr
Machine Learning Center at Georgia Tech

Summaries

Summary Sentence: In this talk, we will first briefly review recent works showing that zero regularization, or fitting of noise, need not be harmful in regression tasks.

Full Summary: In this talk, we will first briefly review recent works showing that zero regularization, or fitting of noise, need not be harmful in regression tasks. Then, we will use this insight to uncover two new surprises for high-dimensional linear classification.

Media
  • Vidya Muthukumar (image/jpeg)

Abstract: Seemingly counter-intuitive phenomena in deep neural networks have prompted a recent re-investigation of classical machine learning methods, such as linear models and kernel methods. Of particular focus are sufficiently high-dimensional setups in which interpolation of the training data is possible. In this talk, we will first briefly review recent works showing that zero regularization, or fitting of noise, need not be harmful in regression tasks. Then, we will use this insight to uncover two new surprises for high-dimensional linear classification:

  • minimum-ℓ2-norm interpolation can classify consistently even when the corresponding regression task fails, and
  • the support-vector machine (SVM) and minimum-ℓ2-norm interpolation solutions exactly coincide in sufficiently high-dimensional models (illustrated in the sketch below).
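
To make the second finding concrete, here is a minimal numerical sketch (a hypothetical illustration, not code from the talk; the Gaussian data model, the sample and feature sizes, and the use of NumPy and scikit-learn are all assumptions). When d is much larger than n, every training point tends to become a support vector, so the hard-margin SVM constraints all hold with equality and the SVM solution coincides with the minimum-ℓ2-norm interpolator of the ±1 labels:

    # Sketch: SVM / min-norm-interpolation coincidence in an overparameterized model.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n, d = 30, 5000                         # n samples, d >> n features (assumed sizes)
    X = rng.standard_normal((n, d))         # isotropic Gaussian covariates
    y = rng.choice([-1.0, 1.0], size=n)     # arbitrary +/-1 labels (both classes w.h.p.)

    # Minimum-l2-norm interpolator of the labels: w = X^T (X X^T)^{-1} y = pinv(X) y
    w_interp = np.linalg.pinv(X) @ y

    # Hard-margin linear SVM, approximated by a very large C
    svm = SVC(kernel="linear", C=1e10).fit(X, y)
    w_svm = svm.coef_.ravel()

    print("support vectors:", len(svm.support_), "of", n)
    print("relative difference:",
          np.linalg.norm(w_svm - w_interp) / np.linalg.norm(w_interp))

With these (assumed) settings, the script typically reports that all n points are support vectors and that the relative difference is near numerical precision; shrinking d toward n should make the two solutions separate.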

Taken together, these findings imply that the (linear/kernel) SVM can generalize well in settings beyond those predicted by training-data-dependent complexity measures. Time permitting, we will also discuss preliminary implications of these results for adversarial robustness, as well as the influence of the choice of training loss function in the overparameterized regime.

This is joint work with Misha Belkin, Daniel Hsu, Adhyyan Narang, Anant Sahai, Vignesh Subramanian, Christos Thrampoulidis, Ke Wang and Ji Xu. 

Speaker Info: Vidya Muthukumar is an Assistant Professor in the Schools of Electrical and Computer Engineering and Industrial and Systems Engineering at the Georgia Institute of Technology. Her broad interests are in game theory and online and statistical learning. She is particularly interested in designing learning algorithms that provably adapt in strategic environments, in fundamental properties of overparameterized models, and in fairness, accountability, and transparency in machine learning.

Vidya received her PhD in Electrical Engineering and Computer Sciences from the University of California, Berkeley. She is the recipient of the Adobe Data Science Research Award, the Simons-Berkeley Research Fellowship (for the Fall 2020 program on "Theory of Reinforcement Learning"), the IBM Science for Social Good Fellowship, and a Georgia Tech Class of 1969 Teaching Fellowship for the academic year 2021-2022.

Additional Information

In Campus Calendar
Yes
Groups

ML@GT

Invited Audience
Faculty/Staff, Postdoc, Public, Graduate students, Undergraduate students
Categories
Seminar/Lecture/Colloquium
Keywords
No keywords were submitted.
Status
  • Created By: Joshua Preston
  • Workflow Status: Published
  • Created On: Mar 23, 2022 - 2:53pm
  • Last Updated: Mar 23, 2022 - 2:54pm