ARC and Indo-US Virtual Center Seminar: Prasad Raghavendra (UC Berkeley)


Event Details
  • Date/Time:
    • Monday, April 27, 2020
      11:30 am - 12:30 pm
  • Location: Virtual via Bluejeans
Summaries

Summary Sentence: List-Decodable Learning via Sum of Squares - Virtual via Bluejeans at 11:30 am


Algorithms & Randomness Center (ARC) and Indo-US Virtual Center Seminar

Prasad Raghavendra (UC Berkeley)

Monday, April 27, 2020

Virtual via Bluejeans - 11:30 am


Title:  List-Decodable Learning via Sum of Squares

Abstract:  In the list-decodable learning setup, an overwhelming majority (say, a $(1-\beta)$-fraction) of the input data consists of outliers, and the goal of an algorithm is to output a small list $\mathcal{L}$ of hypotheses such that one of them agrees with the inliers. We devise list-decodable learning algorithms for the problems of linear regression and subspace recovery using the sum-of-squares SDP hierarchy.

1) In the list-decodable linear regression problem, we are given labelled examples $\{(X_i, y_i)\}_{i \in [N]}$ containing a subset $S$ of $\beta N$ inliers $\{X_i\}_{i \in S}$ that are drawn i.i.d. from the standard Gaussian distribution $N(0, I)$ in $\mathbb{R}^d$ and whose labels $y_i$ are well-approximated by a linear function $\hat{\ell}$.

We devise an algorithm that outputs a list $\mathcal{L}$ of linear functions such that some $\ell \in \mathcal{L}$ is close to $\hat{\ell}$. This yields the first algorithm for linear regression in the list-decodable setting. Our results hold for general distributions of examples whose concentration and anti-concentration properties can be certified by low-degree sum-of-squares proofs. (A toy numerical sketch of this setup appears after this list.)

2) In the subspace recovery problem, we are given a dataset in which an $\alpha$-fraction (less than half) of the data is distributed uniformly in an unknown $k$-dimensional subspace of $\mathbb{R}^d$, with no additional assumptions on the remaining data; the goal is to recover a succinct list of $O(1/\alpha)$ subspaces, one of which is close to the original subspace. We provide the first polynomial-time algorithm for this 'list-decodable subspace recovery' problem. (A companion sketch of this setup appears below.)
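
To make the setup of result 1 concrete, here is a minimal numerical sketch in Python (illustrative names and parameters only; this is not the sum-of-squares algorithm from the talk). It builds an instance with a $\beta$-fraction of Gaussian inliers and adversarial outliers, and contrasts ordinary least squares on the full data with an oracle fit on the inlier set $S$, which is the kind of candidate a list-decodable learner must place in its list $\mathcal{L}$:

    import numpy as np

    # Toy list-decodable regression instance (illustrative parameters).
    rng = np.random.default_rng(0)
    d, N, beta = 10, 1000, 0.2
    n_in = int(beta * N)

    ell_hat = rng.normal(size=d)                  # true linear function
    X_in = rng.normal(size=(n_in, d))             # inliers ~ N(0, I)
    y_in = X_in @ ell_hat + 0.01 * rng.normal(size=n_in)

    X_out = rng.normal(size=(N - n_in, d))        # outliers: labels
    y_out = 10.0 * rng.normal(size=N - n_in)      # unrelated to X_out

    X = np.vstack([X_in, X_out])
    y = np.concatenate([y_in, y_out])

    # Least squares on all the data is dominated by the outliers ...
    ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("OLS error:   ", np.linalg.norm(ols - ell_hat))

    # ... while a fit restricted to the (unknown) inlier set S recovers
    # ell_hat; a list-decodable learner must output a short list
    # containing a candidate of this quality without knowing S.
    oracle, *_ = np.linalg.lstsq(X_in, y_in, rcond=None)
    print("oracle error:", np.linalg.norm(oracle - ell_hat))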

Joint work with Morris Yau.
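
In the same spirit, here is a minimal sketch of the setup in result 2 (again illustrative: Gaussian samples within the subspace stand in for the uniform distribution, and the outliers are chosen adversarially to defeat plain PCA). Closeness of subspaces is measured via the distance between their projection matrices:

    import numpy as np

    # Toy list-decodable subspace recovery instance (illustrative).
    rng = np.random.default_rng(1)
    d, k, N, alpha = 20, 3, 2000, 0.3
    n_in = int(alpha * N)

    # Unknown k-dimensional subspace, given by an orthonormal basis U;
    # the outliers lie in a different, larger-variance subspace W.
    U, _ = np.linalg.qr(rng.normal(size=(d, k)))
    W, _ = np.linalg.qr(rng.normal(size=(d, k)))
    X_in = rng.normal(size=(n_in, k)) @ U.T
    X_out = 2.0 * rng.normal(size=(N - n_in, k)) @ W.T
    X = np.vstack([X_in, X_out])

    def proj_dist(A, B):
        """Distance between subspaces via their projection matrices."""
        return np.linalg.norm(A @ A.T - B @ B.T)

    # Plain PCA on all the data latches onto the outlier subspace W,
    # which is why a list of O(1/alpha) candidates is the right output.
    _, eigvecs = np.linalg.eigh(X.T @ X)
    V = eigvecs[:, -k:]                           # top-k eigenvectors
    print("PCA candidate distance to U:   ", proj_dist(U, V))

    # An oracle PCA on the inliers alone recovers U (up to numerics).
    _, eigvecs_in = np.linalg.eigh(X_in.T @ X_in)
    print("oracle candidate distance to U:", proj_dist(U, eigvecs_in[:, -k:]))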

----------------------------------


Videos of recent talks are available at: https://smartech.gatech.edu/handle/1853/46836

To subscribe to the seminar email list: arc-colloq@Klauscc.gatech.edu

Additional Information

In Campus Calendar
No
Groups

ARC

Invited Audience
Faculty/Staff, Postdoc, Graduate students, Undergraduate students
Categories
Seminar/Lecture/Colloquium