GVU Center Brown Bag: Nithya Sambasivan—The Myopia of Model Centrism

Event Details
Contact

gvu@cc.gatech.edu

Summaries

Summary Sentence: In this seminar, Nithya Sambasivan discusses how fundamentals of AI systems are viewed in AI development.

Full Summary: Nithya Sambasivan (sociotechnical researcher; formerly Research Scientist, PAIR, Google Research) presents results from a multi-year inquiry into how the fundamentals of AI systems are viewed in AI development, paying particular attention to developer practices in AI systems intended for low-resource communities.

Media
  • Sambasivan Photo 2022 (image/jpeg)

Abstract:

AI models seek to intervene in increasingly high-stakes domains, such as cancer detection and microloan allocation. What is the view of the world that guides AI development in high-risk areas, and how does this view regard the complexity of the real world? In this talk, I will present results from my multi-year inquiry into how fundamentals of AI systems—data, expertise, and fairness—are viewed in AI development. I pay particular attention to developer practices in AI systems intended for low-resource communities, especially in the Global South, where people are enrolled as labourers or untapped DAUs. Despite the inordinate role these fundamentals play in model outcomes, data work is under-valued; domain experts are reduced to data-entry operators; and fairness and accountability assumptions do not scale past the West. Instead, model development is glamourised, and model performance is viewed as the indicator of success. The overt emphasis on models, at the cost of ignoring these fundamentals, leads to brittle and reductive interventions that ultimately displace functional and complex real-world systems in low-resource contexts. I put forth practical implications for AI research and practice to shift away from model centrism and toward enabling human ecosystems; in effect, building safer and more robust systems for all.

Speaker Bio:

Dr. Nithya Sambasivan is a sociotechnical researcher whose work focuses on solving hard, socially important design problems impacting marginalised communities in the Global South. Her current research re-imagines AI fundamentals to work for low-resource communities. Dr. Sambasivan's work has been widely covered in venues such as VentureBeat, ZDNet, Scroll.in, O’Reilly, New Scientist, the State of AI Report, Hacker News, and more; it has influenced public policy, including the Indian government’s strategy for responsible AI, and motivated the NeurIPS Datasets track. As a former Staff Research Scientist at Google Research, she pioneered several original, award-winning research initiatives, such as responsible AI in the Global South, human-data interaction, gender equity online, and next billion users, which fundamentally shaped the company’s strategy for emerging markets and landed as new products affecting millions of users across Google Station, Search, YouTube, Android, Maps, and more. Dr. Sambasivan founded and managed a blueprint HCI team at Google Research Bangalore and set up the Accra HCI team, in contexts with limited existing HCI pipelines. Her research has also received several best paper awards at top-tier computing conferences.

Watch via BlueJeans Event: https://primetime.bluejeans.com/a2m/live-event/dbksvxcy

Additional Information

In Campus Calendar
Yes
Groups

College of Computing, GVU Center, IPaT, School of Interactive Computing, ML@GT

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Seminar/Lecture/Colloquium
Keywords
No keywords were submitted.
Status
  • Created By: Dorie Taylor
  • Workflow Status: Published
  • Created On: Jan 27, 2022 - 1:17pm
  • Last Updated: Feb 17, 2022 - 1:28pm