PhD Defense by David Byrd


Event Details
  • Date/Time:
    • Thursday, July 15, 2021
      2:00 pm - 4:00 pm
  • Location: Atlanta, GA; REMOTE
  • URL: Bluejeans

 

Title: Responsible Machine Learning: Supporting Privacy Preservation and Normative Alignment with Multi-agent Simulation

 

David Byrd

Ph.D. Candidate

School of Interactive Computing

Georgia Institute of Technology

 

Date: Thursday, July 15, 2021

Time: 2:00 pm to 4:00 pm (EDT)

Location (remote via BlueJeans): https://bluejeans.com/5512415242

 

Committee:

Dr. Tucker Balch (advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Mark Riedl, School of Interactive Computing, Georgia Institute of Technology

Dr. Thad Starner, School of Interactive Computing, Georgia Institute of Technology

Dr. Maria Hybinette, Department of Computer Science, University of Georgia

Dr. Jonathan Clarke, Scheller College of Business, Georgia Institute of Technology

 

Abstract:

In the last decade, machine learning (ML) algorithms have been applied to complex problems with a level of success that makes them attractive to government and industry practitioners. Post hoc analysis of some such systems has raised questions about the responsible use of ML, particularly with regard to unintended algorithmic bias against protected groups of interest. This has drawn attention to the need to consider the potentially disparate outcomes that arise from the use of our models and systems, but avoiding biased decision making should be the beginning, not the end, of our commitment to responsible ML. From the safety of robotic systems, to the privacy of user training data, to the ability of autonomous systems to obey the law, any kind of unintentionally harmful outcome is worthy of attention and mitigation.

This dissertation aims to advance responsible machine learning through multi-agent simulation (MAS). We introduce an open-source, multi-domain discrete event simulation framework and use it to (1) improve state-of-the-art privacy-preserving federated learning (PPFL) and (2) demonstrate a novel method for normatively-aligned reinforcement learning (RL) from synthetic negative examples.
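
To give a rough sense of the kind of single-threaded, message-passing discrete event kernel such a framework is built around, the sketch below implements a minimal version in Python. The class and method names here (Kernel, send, receive, EchoAgent) are illustrative assumptions, not the dissertation's actual API.

```python
import heapq
import itertools


class Kernel:
    """Minimal discrete event kernel: delivers time-stamped messages to
    agents in order and lets agents schedule new messages in response."""

    def __init__(self, agents):
        self.agents = {a.agent_id: a for a in agents}
        self.queue = []                  # priority queue of (time, seq, recipient, message)
        self._seq = itertools.count()    # tie-breaker for equal timestamps

    def send(self, time, recipient_id, message):
        heapq.heappush(self.queue, (time, next(self._seq), recipient_id, message))

    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            time, _, recipient_id, message = heapq.heappop(self.queue)
            self.agents[recipient_id].receive(time, message, self)


class EchoAgent:
    """Toy agent that re-schedules a wakeup message to itself every 10 ticks."""

    def __init__(self, agent_id):
        self.agent_id = agent_id

    def receive(self, time, message, kernel):
        print(f"t={time}: agent {self.agent_id} got {message!r}")
        kernel.send(time + 10, self.agent_id, "wakeup")


kernel = Kernel([EchoAgent(0)])
kernel.send(0, 0, "start")
kernel.run(until=30)
```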

After introducing our simulation framework, we implement two PPFL protocols as single-threaded MAS: a recent state-of-the-art protocol that combines differential privacy with secure multiparty computation via homomorphic encryption, and a new protocol incorporating oblivious distributed differential privacy. The simulation permits us to inexpensively evaluate both protocols for model accuracy, deployed parallel running time, and resistance to collusion attacks by participating parties. We empirically demonstrate that the new protocol improves privacy with no loss of accuracy to the final shared model.
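
To make the aggregation idea concrete, here is a minimal Python sketch of privacy-preserving federated averaging in which each client adds Gaussian noise for differential privacy plus pairwise random masks that cancel in the server's sum, a simple stand-in for the secure multiparty computation step. The parameters (noise_sigma, n_clients) and the masking scheme are simplifying assumptions, not the protocols evaluated in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, noise_sigma = 4, 8, 0.1

# Each client holds a local model update (e.g., a gradient).
local_updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Pairwise masks: client i adds +mask(i, j) for j > i and subtracts mask(j, i)
# for j < i, so every mask cancels when the server sums all contributions.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}


def masked_update(i):
    update = local_updates[i] + rng.normal(scale=noise_sigma, size=dim)  # DP noise
    for j in range(n_clients):
        if i < j:
            update = update + masks[(i, j)]
        elif j < i:
            update = update - masks[(j, i)]
    return update


# The server only ever sees masked, noised contributions.
aggregate = sum(masked_update(i) for i in range(n_clients)) / n_clients
true_mean = sum(local_updates) / n_clients
print("max deviation from true mean:", np.max(np.abs(aggregate - true_mean)))
```

The masks hide each individual update from the server, while the per-client noise limits what the aggregate itself can reveal; the printed deviation reflects only the averaged noise.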

Next, we address normatively-aligned learning with application to financial markets. “Spoofing”, or illegal market manipulation through the placement of false orders, has been a topic of regulatory interest in recent years. We construct a realistic simulation of profit-driven but ethical agents trading through a stock exchange, introduce a spoofing agent, and learn to distinguish its action sequences from those of the other agents. We then use the spoofing detector to train an intelligent RL trading agent that generates profit while avoiding behaviors that resemble spoofing.
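
One common way to realize this kind of normative alignment is reward shaping: penalize the profit signal by a detector's confidence that the agent's recent behavior resembles spoofing. The Python sketch below illustrates that idea under assumed interfaces (spoof_probability, lambda_penalty, the toy cancellation heuristic); it is not the dissertation's implementation.

```python
from typing import Sequence


def spoof_probability(order_sequence: Sequence[dict]) -> float:
    """Stand-in for a trained classifier: scores how much the recent
    order flow resembles spoofing (0 = benign, 1 = spoof-like)."""
    if not order_sequence:
        return 0.0
    cancels = sum(1 for o in order_sequence if o["action"] == "cancel")
    return min(1.0, cancels / len(order_sequence))  # toy heuristic


def shaped_reward(pnl_change: float,
                  recent_orders: Sequence[dict],
                  lambda_penalty: float = 10.0) -> float:
    """Reward = change in mark-to-market profit minus a penalty that grows
    with the detector's confidence that the behavior resembles spoofing."""
    return pnl_change - lambda_penalty * spoof_probability(recent_orders)


# Example: a small profit earned via heavy order cancellation is penalized.
orders = [{"action": "place"}, {"action": "cancel"}, {"action": "cancel"}]
print(shaped_reward(pnl_change=5.0, recent_orders=orders))
```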

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD Defense
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Jul 6, 2021 - 4:22pm
  • Last Updated: Jul 6, 2021 - 4:22pm