MS Proposal by Taehwan Seo


Event Details
  • Date/Time:
    • Tuesday, October 4, 2022
      11:00 am - 2:00 pm
  • Location: Montgomery Knight 325
  • URL: Zoom
  • Fee(s): N/A
Summaries

Summary Sentence: Verification of Adversarially Robust Reinforcement Learning Mechanisms in Autonomous Systems

Taehwan Seo
(Advisor: Prof. Kyriakos G. Vamvoudakis)

will propose a master’s thesis entitled,

Verification of Adversarially Robust Reinforcement Learning Mechanisms in Autonomous Systems

On

Tuesday, October 4 at 11:00 a.m.
Montgomery Knight 325

Or

https://gatech.zoom.us/j/93975639144?pwd=QVdSWDd6SnBJQnU4eFFQY0lBN2lXQT09

Abstract
The growing implementation of Artificial Intelligence (AI) in autonomous dynamical systems motivates measuring the performance of AI models through verification. The performance and safety of a Cyber-Physical System (CPS) are subject to cyberattacks that intend to make the system fail in operation or to interrupt learning by modulating the learning data. For a safety and reliability scheme, verifying the impact of attacks on a CPS with a learning system is critical. This thesis proposal presents a verification framework for adversarially robust Reinforcement Learning (RL) policies using the software toolkit ‘VerifAI’, providing robustness measures over adversarial attack perturbations. This equips an algorithm engineer with an RL control model verification toolbox that may be used to evaluate the reliability of any given attack mitigation algorithm and the performance of nonlinear control algorithms against their objectives. For this work, we developed an attack-mitigating RL scheme for nonlinear dynamics by interconnecting off-policy RL with an on-off adversarially robust mechanism. We then connected it with the simulation and verification toolkit to test both the verification framework and the integrated algorithm. This proposal is expected to return sample robustness measures and an analysis of the effectiveness of the adversarially robust learning method. The work will describe each component in detail, the method of interconnection with the verification toolkit, and the application to simulations as a testbed. A possible set of simulations is also proposed to test the designed framework.
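
As a rough illustration of the kind of robustness measure described above, the sketch below samples bounded adversarial perturbations of the policy’s observations, rolls out the perturbed policy in simulation, and compares the return against an unattacked rollout. This is a minimal, hypothetical sketch: `env`, `policy`, `obs_dim`, and the perturbation model are placeholder assumptions, not the thesis code or the VerifAI API.

# Minimal sketch (hypothetical; not the thesis implementation or the VerifAI API):
# estimate a robustness measure for an RL policy under bounded adversarial
# perturbations of its observations.
import numpy as np


def rollout_return(env, policy, perturb=None, horizon=200):
    """Run one episode and return the accumulated reward.

    `perturb` optionally maps a clean observation to an attacked one,
    standing in for an adversarial modulation of the sensed data.
    """
    obs, total = env.reset(), 0.0
    for _ in range(horizon):
        attacked = perturb(obs) if perturb is not None else obs
        obs, reward, done = env.step(policy(attacked))
        total += reward
        if done:
            break
    return total


def robustness_measure(env, policy, eps, trials=50, seed=0):
    """Sample perturbations with ||delta||_inf <= eps and report the
    worst-case drop in return relative to the unattacked rollout."""
    rng = np.random.default_rng(seed)
    nominal = rollout_return(env, policy)
    worst = nominal
    for _ in range(trials):
        delta = rng.uniform(-eps, eps, size=env.obs_dim)
        worst = min(worst, rollout_return(env, policy, perturb=lambda o: o + delta))
    return nominal - worst  # larger gap => less robust policy

In the proposed framework, a toolkit such as VerifAI would take the place of this naive random-sampling loop, searching the perturbation space more systematically through its samplers and falsification machinery.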

Committee

  • Prof. Kyriakos G. Vamvoudakis – School of Aerospace Engineering (advisor)
  • Prof. Dimitri Mavris – School of Aerospace Engineering
  • Prof. Yongxin Chen – School of Aerospace Engineering

 

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Undergraduate students
Categories
Other/Miscellaneous
Keywords
MS Proposal
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Sep 13, 2022 - 8:53am
  • Last Updated: Sep 13, 2022 - 8:53am