Abstract: Seemingly counter-intuitive phenomena in deep neural networks have prompted a recent re-investigation of classical machine learning methods, such as linear models and kernel methods. Of particular focus are sufficiently high-dimensional setups in which interpolation of the training data is possible. In this talk, we will first briefly review recent works showing that zero regularization, or fitting of noise, need not be harmful in regression tasks. Then, we will use this insight to uncover two new surprises for high-dimensional linear classification.
These findings taken together imply that the (linear/kernel) SVM can generalize well in settings beyond those predicted by training-data-dependent complexity measures. Time permitting, we will also discuss preliminary implications of these results for adversarial robustness, and the influence of the choice of training loss function in the overparameterized regime.
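As a self-contained illustration of the regression insight above (a sketch of the general phenomenon, not code or a data model from the talk), the following NumPy snippet builds an assumed spiked-covariance setup with far more features than samples. The minimum-norm linear interpolator fits every noisy training label essentially exactly, yet still predicts far better on fresh data than the trivial zero predictor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized setup: far more features (d) than samples (n).
n, d = 50, 2000

# Assumed data model (illustrative only): a few "strong" feature
# directions carry the signal; the rest are weak directions.
scales = np.ones(d)
scales[:10] = 10.0
w_star = np.zeros(d)
w_star[:10] = 1.0

X = rng.normal(size=(n, d)) * scales
y = X @ w_star + 0.1 * rng.normal(size=n)  # noisy training labels

# Minimum-norm interpolator: w_hat = X^T (X X^T)^{-1} y, via pseudoinverse.
w_hat = np.linalg.pinv(X) @ y

# It fits every training point -- noise included -- essentially exactly.
train_err = np.max(np.abs(X @ w_hat - y))

# Yet on fresh data it still beats the trivial zero predictor.
X_test = rng.normal(size=(1000, d)) * scales
test_mse = np.mean((X_test @ (w_hat - w_star)) ** 2)
null_risk = np.mean((X_test @ w_star) ** 2)
print(f"max train residual: {train_err:.2e}")
print(f"test MSE: {test_mse:.1f}  vs  null risk: {null_risk:.1f}")
```

The spiked scaling is what makes the overfitting benign here: the interpolator concentrates signal recovery on the strong directions and spreads the fitted noise harmlessly over the many weak ones.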
This is joint work with Misha Belkin, Daniel Hsu, Adhyyan Narang, Anant Sahai, Vignesh Subramanian, Christos Thrampoulidis, Ke Wang and Ji Xu.
Speaker Info: Vidya Muthukumar is an Assistant Professor in the Schools of Electrical and Computer Engineering and Industrial and Systems Engineering at the Georgia Institute of Technology. Her broad interests are in game theory, online learning, and statistical learning. She is particularly interested in designing learning algorithms that provably adapt in strategic environments, fundamental properties of overparameterized models, and fairness, accountability, and transparency in machine learning.
Vidya received her PhD in Electrical Engineering and Computer Sciences from the University of California, Berkeley. She is the recipient of the Adobe Data Science Research Award, the Simons-Berkeley Research Fellowship (for the Fall 2020 program on "Theory of Reinforcement Learning"), the IBM Science for Social Good Fellowship, and a Georgia Tech Class of 1969 Teaching Fellowship for the academic year 2021-2022.