ML@GT Virtual Seminar: Ellie Pavlick, Brown University

Event Details
  • Date/Time: Wednesday, March 24, 2021, 12:15 pm - 1:15 pm
  • Location: https://primetime.bluejeans.com/a2m/register/esbdzzaf
  • Fee(s): N/A
Contact

Allie McFadden

allie.mcfadden@cc.gatech.edu

Summaries

Summary Sentence: ML@GT is hosting a virtual seminar featuring Ellie Pavlick from Brown University.

ML@GT is hosting a virtual seminar featuring Ellie Pavlick from Brown University.

Registration is required.

You can lead a horse to water...: Representing vs. Using Features in Neural NLP

Abstract

A wave of recent work has sought to understand how pretrained language models work. Such analyses have produced two seemingly contradictory sets of results. On the one hand, work based on "probing classifiers" generally suggests that state-of-the-art language models contain rich information about linguistic structure (e.g., parts of speech, syntax, semantic roles). On the other hand, work that measures performance on linguistic "challenge sets" shows that models consistently fail to use this information when making predictions. In this talk, I will present a series of results that attempt to bridge this gap. Our recent experiments suggest that the disconnect is not due to catastrophic forgetting, nor is it (entirely) explained by insufficient training data. Rather, it is best explained in terms of how "accessible" features are to the model following pretraining, where "accessibility" can be quantified using an information-theoretic interpretation of probing classifiers.
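
For readers unfamiliar with the "probing classifier" methodology referenced above, the sketch below (Python, using PyTorch) trains a small linear probe on frozen representations to predict part-of-speech tags: high probe accuracy is taken as evidence that the feature is encoded in the representations. The dimensions, random stand-in features, and labels are illustrative assumptions only; this is not the speaker's experimental setup, and the information-theoretic refinement mentioned in the abstract (quantifying "accessibility" via code length rather than raw probe accuracy) is not shown.

# Minimal sketch of a probing classifier: a simple linear model trained on
# frozen pretrained representations to test whether a linguistic feature
# (here, part-of-speech tags) is decodable. Shapes and data are placeholders.
import torch
import torch.nn as nn

hidden_dim, num_tags, num_tokens = 768, 17, 1000   # e.g., BERT-base width, UPOS tagset (assumed)

# Stand-ins for frozen contextual embeddings and gold POS labels.
frozen_reprs = torch.randn(num_tokens, hidden_dim)       # would come from a frozen language model
pos_labels = torch.randint(0, num_tags, (num_tokens,))   # would come from gold annotations

probe = nn.Linear(hidden_dim, num_tags)                  # the probe itself is deliberately simple
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(probe(frozen_reprs), pos_labels)      # only the probe's weights are updated
    loss.backward()
    optimizer.step()

with torch.no_grad():
    acc = (probe(frozen_reprs).argmax(dim=-1) == pos_labels).float().mean()
print(f"probe accuracy: {acc.item():.3f}")               # high accuracy suggests the feature is represented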

About Ellie

Ellie Pavlick is an Assistant Professor of Computer Science at Brown University, where she leads the Language Understanding and Representation (LUNAR) Lab. She received her PhD from the one-and-only University of Pennsylvania. Her current work focuses on building more cognitively plausible models of natural language semantics, with an emphasis on grounded language learning and on the sample efficiency and generalization of neural language models.

Additional Information

In Campus Calendar
Yes
Groups

College of Computing, Computational Science and Engineering, GVU Center, Machine Learning, ML@GT, OMS, School of Computational Science and Engineering, School of Computer Science, School of Interactive Computing

Invited Audience
Faculty/Staff, Postdoc, Public, Graduate students, Undergraduate students
Status
  • Created By: ablinder6
  • Workflow Status: Published
  • Created On: Dec 14, 2020 - 10:14am
  • Last Updated: Mar 9, 2021 - 11:14am