PhD Proposal by De’Aira Bryant


Event Details
  • Date/Time: Tuesday, January 31, 2023, 8:00 am – 10:00 am
  • Location: REMOTE
  • Fee(s): N/A

Title: Development of a Bias-Aware Algorithm for the Analysis and Perception of Children’s Facial Expressions by Autonomous Agents

 

Date: Tuesday, January 31, 2023

Time: 8:00am – 10:00am

 

Location: Zoom Link

Meeting ID: 936 4850 8890

Passcode: 576950

 

De’Aira Bryant

Computer Science Ph.D. Student

School of Interactive Computing

Georgia Institute of Technology

 

Committee:

Ayanna Howard (Advisor) – School of Interactive Computing, Georgia Institute of Technology / College of Engineering, Ohio State University

Charles Isbell – School of Interactive Computing, Georgia Institute of Technology

Sonia Chernova – School of Interactive Computing, Georgia Institute of Technology

Jason Borenstein – School of Public Policy, Georgia Institute of Technology

Tom Williams – Department of Computer Science, Colorado School of Mines

 

Abstract

The field of human-robot interaction (HRI) has made great strides toward designing autonomous agents that operate in real-world environments alongside humans. Potential applications span education, healthcare, hospitality, and more. These agents are often complex intelligent systems that rely on various perception algorithms to interact with their environment. Prior work in artificial intelligence (AI) has shown that human-centered perception algorithms are susceptible to producing biased output, raising ethical concerns around fairness, privacy, and safety. The measurement and mitigation of bias in AI systems have since become among the most pressing challenges in computer science. Yet very little work has examined bias with respect to autonomous agents or how the perpetuation of bias via autonomous agents affects HRI.

 

This thesis examines the effects of human, data, and algorithmic bias on HRI through the lens of facial expression recognition (FER) and presents bias-aware techniques that facilitate more effective HRI. First, we analyze human bias through an examination of normative expectations for expressive autonomous agents, considering factors such as robot race, gender, and embodiment. These expectations help inform the design processes needed to develop effective agents. Next, we investigate data and algorithmic bias in FER systems for populations with scarce data. Subsequently, we develop improved techniques for modeling facial expression perception and benchmarking FER algorithms. We then propose the application of a semi-supervised machine learning technique, self-learning, as a bias-aware strategy for FER development and present preliminary results from a pilot experiment. We further propose a validation experiment to assess human perceptions of fairness during an interaction with an autonomous agent using a standard or bias-aware FER perception algorithm, thus improving our understanding of the implications of bias for HRI.

 
