PhD Defense by Reid Bishop

Event Details
  • Date/Time:
    • Monday October 7, 2019
      9:30 am - 11:30 am
  • Location: Groseclose 202
Summaries

Summary Sentence: Developing Trust and Managing Uncertainty in Partially Observable Sequential Decision-Making Environments

Reid Bishop

Developing Trust and Managing Uncertainty in Partially Observable Sequential Decision-Making Environments

 

Advisor: 

Prof. Chelsea C. White III, Industrial & Systems Engineering, Georgia Institute of Technology

 

Committee:

Prof. Enlu Zhou, Industrial & Systems Engineering, Georgia Institute of Technology

Prof. Hayriye Ayhan, Industrial & Systems Engineering, Georgia Institute of Technology

Prof. He Wang, Industrial & Systems Engineering, Georgia Institute of Technology

Dr. Brandon Eames, Sandia National Laboratories

Dr. Alexander Outkin, Sandia National Laboratories

 

Date/Time/Location:

October 7, 2019 @ 9:30 am EDT

Groseclose 202

 

Abstract:

This dissertation consists of three distinct, although conceptually related, papers that are unified in their focus on data-driven, stochastic sequential decision-making environments, but differentiated in their respective applications. In Chapter 2, we discuss a special class of partially observable Markov decision processes (POMDPs) in which the sources of uncertainty can be naturally separated into a hierarchy of effects — controllable, completely observable effects and exogenous, partially observable effects. For this class of POMDPs, we provide conditions under which value and policy function structural properties are inherited from an analogous class of MDPs, and discuss specialized solution procedures.
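The POMDP framework described above rests on maintaining a Bayesian belief over the partially observable component and updating it after each action and observation. As a minimal sketch of that standard update (not the dissertation's specialized solution procedure; the state, action, and observation model below is a made-up toy example):

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """One Bayesian belief update for a discrete POMDP.

    b : belief over hidden states, shape (S,)
    a : action index
    o : observation index
    T : T[a][s, s'] = P(s' | s, a), shape (A, S, S)
    O : O[a][s', o] = P(o | s', a), shape (A, S, Z)
    """
    pred = b @ T[a]              # predicted next-state distribution P(s' | b, a)
    unnorm = pred * O[a][:, o]   # weight by likelihood of the observation
    return unnorm / unnorm.sum() # normalize to a probability distribution

# Illustrative toy model: two hidden states, one action, two observations.
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])
O = np.array([[[0.7, 0.3],
               [0.4, 0.6]]])
b0 = np.array([0.5, 0.5])
b1 = belief_update(b0, a=0, o=0, T=T, O=O)  # belief shifts toward state 0
```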

In Chapter 3, we discuss an inventory control problem in which actions are time-lagged, and there are three explicit sources of demand uncertainty — the state of the macroeconomy, product-specific demand variability, and information quality. We prove that a base stock policy — defined with respect to pipeline inventory and a Bayesian belief distribution over states of the macroeconomy — is optimal, and demonstrate how to compute these base stock levels efficiently using support vector machines and Monte Carlo simulation. Further, we show how to use these results to determine how best to strategically allocate capital toward a better information infrastructure or a more agile supply chain.
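The base-stock computation can be illustrated with a generic Monte Carlo sketch: sample demand from the belief-weighted mixture over macroeconomy states, then pick the base stock level with the lowest estimated holding-plus-shortage cost. This is a simplified stand-in (the dissertation combines Monte Carlo simulation with support vector machines); the belief, demand regimes, and cost parameters below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_base_stock(demand_sampler, levels, h=1.0, p=4.0, n=10_000):
    """Monte Carlo estimate of the cost-minimizing base stock level.

    demand_sampler(n) draws n demands from the belief-weighted mixture;
    h is the unit holding cost, p the unit shortage (backorder) cost.
    """
    d = demand_sampler(n)  # common samples keep cost estimates comparable
    costs = [np.mean(h * np.maximum(S - d, 0) + p * np.maximum(d - S, 0))
             for S in levels]
    return levels[int(np.argmin(costs))]

# Hypothetical belief over two macroeconomy states mixing two demand regimes.
belief = np.array([0.7, 0.3])     # P(strong economy), P(weak economy)
means = np.array([100.0, 60.0])   # mean demand in each regime

def sampler(n):
    states = rng.choice(2, size=n, p=belief)
    return rng.poisson(means[states])

S_star = estimate_base_stock(sampler, levels=np.arange(50, 151))
```

Raising the shortage cost `p` relative to `h` pushes the critical fractile `p / (p + h)` toward one and the optimal base stock level upward, which is the lever the capital-allocation question above trades off against better demand information.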

Finally, in Chapter 4, we consider how to generate trust in so-called development processes, such as supply chains, certain artificial intelligence systems, and maintenance processes, in which there can be adversarial manipulation and we must hedge against the risk of misapprehension of attacker objectives and resources. We show how to model dynamic agent interaction using a partially observable Markov game (POMG) framework, and present a heuristic solution procedure, based on self-training concepts, for determining a robust defender policy.
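The self-training idea can be shown in miniature with fictitious play on a zero-sum matrix game: each player repeatedly best-responds to the opponent's empirical action frequencies, and the averaged strategies approach an equilibrium. The toy attacker-defender payoff matrix below only illustrates this self-play principle; it is not the POMG heuristic itself:

```python
import numpy as np

def fictitious_play(payoff, iters=5000):
    """Fictitious play on a zero-sum matrix game (entries = defender payoff).

    The defender (row player) maximizes and the attacker (column player)
    minimizes; each best-responds to the other's empirical action frequencies.
    Returns the two averaged (mixed) strategies.
    """
    m, n = payoff.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    row_counts[0] = col_counts[0] = 1.0  # arbitrary initial plays
    for _ in range(iters):
        i = int(np.argmax(payoff @ (col_counts / col_counts.sum())))
        j = int(np.argmin((row_counts / row_counts.sum()) @ payoff))
        row_counts[i] += 1
        col_counts[j] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Hypothetical game: defend asset 1 or 2 against an attack on asset 1 or 2.
payoff = np.array([[ 1.0, -1.0],
                   [-1.0,  1.0]])  # matching-pennies structure
x, y = fictitious_play(payoff)     # both mixes approach (0.5, 0.5)
```

The averaged strategies hedge against uncertainty about which asset the attacker targets, which is the same robustness motivation behind the defender policy described above.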

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD Defense
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Sep 24, 2019 - 3:10pm
  • Last Updated: Sep 24, 2019 - 3:10pm