ARC Colloquium: Sumegha Garg (Princeton)


Event Details
  • Date/Time:
    • Monday, October 12, 2020
      11:00 am - 11:59 am
  • Location: Virtual via Bluejeans
  • Fee(s):
    N/A
Contact
No contact information submitted.
Summaries

Summary Sentence: Extractor-based Approach to Proving Memory-Sample Lower Bounds for Learning: Virtual via Bluejeans at 11:00am

Full Summary: No summary paragraph submitted.

Algorithms & Randomness Center (ARC)

Sumegha Garg (Princeton)

Monday, October 12, 2020

Virtual via Bluejeans - 11:00 am

 

Title:  Extractor-based Approach to Proving Memory-Sample Lower Bounds for Learning

Abstract:  A recent line of work has focused on the following question: Can one prove strong unconditional lower bounds on the number of samples needed for learning under memory constraints? We study an extractor-based approach to proving such bounds for a large class of learning problems as follows.

A matrix M: A x X -> {-1,1} corresponds to the following learning problem: An unknown function f in X is chosen uniformly at random. A learner tries to learn f from a stream of samples (a_1, b_1), (a_2, b_2), ..., where for every i, a_i in A is chosen uniformly at random and b_i = M(a_i, f).
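
As a minimal illustrative sketch (not part of the abstract itself), the streaming model above can be instantiated with the parity matrix M(a, f) = (-1)^{<a, f> mod 2} over A = X = {0,1}^n, the parity-learning problem mentioned later in the abstract; the Python names and parameters below are hypothetical:

    import random

    def parity_M(a, f):
        # M: A x X -> {-1, 1} for parities: (-1) raised to (<a, f> mod 2).
        return -1 if sum(ai & fi for ai, fi in zip(a, f)) % 2 == 1 else 1

    def sample_stream(n, num_samples, rng=random):
        # The unknown f in X is chosen uniformly at random once, up front.
        f = tuple(rng.randrange(2) for _ in range(n))
        for _ in range(num_samples):
            a = tuple(rng.randrange(2) for _ in range(n))  # uniform a_i in A
            yield a, parity_M(a, f)

    # A memory-bounded learner reads the stream one sample at a time:
    for a, b in sample_stream(n=8, num_samples=5):
        print(a, b)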

Assume that k, l, r are such that any submatrix of M, with at least 2^{-k}|A| rows and at least 2^{-l}|X| columns, has a bias of at most 2^{-r} (extractor property). We show that any learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k l), or at least 2^{Ω(r)} samples.
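
For intuition, the bias of M restricted to a set of rows and a set of columns is the absolute average of its entries over that submatrix; the extractor property asks that every submatrix with at least 2^{-k}|A| rows and at least 2^{-l}|X| columns has bias at most 2^{-r}. Below is a purely illustrative brute-force check of this quantity (exponential time, feasible only for toy matrices; all names are hypothetical):

    from itertools import combinations

    def bias(M, rows, cols):
        # |average of M[a][x] over the submatrix rows x cols|
        total = sum(M[a][x] for a in rows for x in cols)
        return abs(total) / (len(rows) * len(cols))

    def max_submatrix_bias(M, min_rows, min_cols):
        # Largest bias over all submatrices with at least min_rows rows
        # and min_cols columns (brute force; only for tiny M).
        nA, nX = len(M), len(M[0])
        worst = 0.0
        for r in range(min_rows, nA + 1):
            for rows in combinations(range(nA), r):
                for c in range(min_cols, nX + 1):
                    for cols in combinations(range(nX), c):
                        worst = max(worst, bias(M, rows, cols))
        return worst

    # Example: the 2x2 parity matrix over one bit, M(a, x) = (-1)^(a*x).
    M = [[1, 1],
         [1, -1]]
    print(max_submatrix_bias(M, min_rows=2, min_cols=2))  # full-matrix bias = 0.5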

We also extend the lower bounds to a learner that is allowed two passes over the stream of samples. In particular, we show that any two-pass algorithm for learning parities of size n requires either a memory of size Ω(n^{3/2}) or at least 2^{Ω(n^{1/2})} samples.

Joint works with Ran Raz and Avishay Tal.

----------------------------------

Speaker's Webpage

Videos of recent talks are available at: http://arc.gatech.edu/node/121

Subscribe to the seminar email list: arc-colloq@Klauscc.gatech.edu

Additional Information

In Campus Calendar
No
Groups

ARC

Invited Audience
Faculty/Staff, Postdoc, Graduate students, Undergraduate students
Categories
Seminar/Lecture/Colloquium
Keywords
No keywords were submitted.
Status
  • Created By: Francella Tonge
  • Workflow Status: Published
  • Created On: Sep 3, 2020 - 10:56am
  • Last Updated: Sep 29, 2020 - 12:23pm