Ph.D. Thesis Proposal: Christopher Simpkins


Event Details
  • Date/Time:
    • Tuesday May 8, 2012 - Wednesday May 9, 2012
      5:00 pm - 6:59 pm
  • Location: KACB 1116W
Contact

Christopher Simpkins

Summaries

Summary Sentence: Integrating Reinforcement Learning into a Programming Language

Full Summary: No summary paragraph submitted.

Ph.D. Thesis Proposal Announcement

Title: Integrating Reinforcement Learning into a Programming Language

Christopher Simpkins
School of Interactive Computing
Georgia Institute of Technology


Date:     8 May 2012 (revised)
Time:     1:00 - 3:00 pm (revised)
Location: Klaus 1116W (revised)


Committee:

  • Professor Charles Isbell, School of Interactive Computing (Advisor)
  • Dr. Douglas Bodner, Tennenbaum Institute Professor
  • Mark Riedl, School of Interactive Computing
  • Dr. Spencer Rugaber, School of Computer Science
  • Professor Andrea Thomaz, School of Interactive Computing


Abstract:
My Thesis: Integrating modular reinforcement learning (MRL) into a programming language supports adaptive agent software engineering. There are three claims implied in this thesis statement: (1) there is such a thing as MRL in a software engineering sense, (2) integrating MRL into a programming language is feasible, and (3) integrating MRL into a programming language is useful to software engineers writing adaptive software agents.

Modular reinforcement learning decomposes a reinforcement learning agent into components that solve subproblems of the total problem faced by an agent.  Hierarchical reinforcement learning (HRL), which decomposes problems temporally into subtasks, is well developed.  MRL, which decomposes problems into concurrent subproblems, is still nascent.  Existing approaches to MRL are not modular in a software engineering sense because inter-component reward coupling prevents reuse.  This dissertation will demonstrate the reward coupling problem and contribute a solution in the form of a reformulation of MRL and an algorithm that implements it.
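To make the decomposition concrete, here is a minimal sketch of modular action selection in Scala. All names and the fixed Q-values are illustrative assumptions, not taken from the proposal: each module carries its own Q-function fragment (learned from its own reward signal), and an arbitrator chooses the action with the greatest summed preference across modules ("greatest mass" style arbitration).

```scala
object ModularArbitration {
  type State = Int
  type Action = String

  // A module solves one subproblem: it has its own reward signal,
  // so it carries its own Q-value fragment (here pre-filled for brevity).
  final case class Module(name: String, q: Map[(State, Action), Double])

  // Arbitrate by summing each module's Q-value for every candidate action
  // and picking the action with the greatest total preference.
  def selectAction(modules: Seq[Module], s: State, actions: Seq[Action]): Action =
    actions.maxBy(a => modules.map(_.q.getOrElse((s, a), 0.0)).sum)

  def main(args: Array[String]): Unit = {
    val eat   = Module("eat",   Map((0, "left") -> 1.0,  (0, "right") -> 0.2))
    val avoid = Module("avoid", Map((0, "left") -> -2.0, (0, "right") -> 0.5))
    // left sums to -1.0, right sums to 0.7, so arbitration chooses "right"
    println(selectAction(Seq(eat, avoid), 0, Seq("left", "right"))) // prints "right"
  }
}
```

The coupling problem the proposal targets arises when one module's reward must be rescaled whenever another module changes; bundling each reward with its module is what makes the decomposition modular in a software engineering sense.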

Our goal is to support practical software engineering.  The best way to support software engineering is with practical, usable programming languages.  This dissertation will contribute a programming language, implemented as a Scala library and associated idioms and design patterns, called AFABL -- A {Friendly|Flexible} Adaptive Behavior Language -- that integrates MRL, making MRL useful to software engineers writing practical adaptive agent software.
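A hypothetical Scala sketch (not AFABL's actual API, which the proposal does not specify) of the reuse such a library-embedded language could enable: because each behavior module bundles its own reward, the same module can be dropped into different agents without rewriting the other modules' rewards. The module and agent names and reward values below are invented for illustration.

```scala
object ReuseSketch {
  type Action = String

  // A behavior module bundles its own reward over actions (hypothetical
  // stand-in for a learned module), so it is self-contained and reusable.
  final case class Module(name: String, reward: Map[Action, Double])

  // An agent is a composition of modules; it arbitrates by summed reward.
  final case class Agent(name: String, modules: Seq[Module]) {
    def choose(actions: Seq[Action]): Action =
      actions.maxBy(a => modules.map(_.reward.getOrElse(a, 0.0)).sum)
  }

  // The same avoidFire module drops into two different agents unchanged.
  val avoidFire = Module("avoidFire", Map("enterRoom" -> -5.0, "stayOut" -> 0.0))
  val explorer  = Agent("explorer", Seq(Module("explore",   Map("enterRoom" -> 6.0)), avoidFire))
  val guard     = Agent("guard",    Seq(Module("guardDoor", Map("stayOut"   -> 1.0)), avoidFire))
}
```

Here `ReuseSketch.explorer.choose(Seq("enterRoom", "stayOut"))` yields `"enterRoom"` (6.0 - 5.0 outweighs 0.0) while the guard agent, reusing the identical avoidFire module, yields `"stayOut"`.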

Finally, we will apply AFABL to non-player character (NPC) programming in games and agent simulations to demonstrate its usefulness to software engineers writing adaptive software agents.  This application of AFABL to practical software engineering problems will distinguish AFABL from previous work in integrating RL into programming languages such as ALisp.

Additional Information

In Campus Calendar
No
Groups

College of Computing, School of Interactive Computing

Status
  • Created By: Jupiter
  • Workflow Status: Published
  • Created On: Apr 23, 2012 - 7:14am
  • Last Updated: Oct 7, 2016 - 9:58pm