Abstract: Robots that operate in human environments need the capability to adapt their behavior to new situations and people’s preferences while ensuring the safety of the robot and its environment. Most robots so far rely on pre-programmed behavior or machine learning algorithms trained offline on selected data. Given the large number of possible situations robots might encounter, it is impractical to define or learn all behaviors prior to deployment, so robots will inevitably fail at some point.
Typically, experts are called in to correct the robot’s behavior, and existing correction approaches often do not provide formal guarantees on the system’s behavior to ensure safety. However, in many everyday situations we can leverage feedback from people who do not necessarily have programming or robotics experience, i.e., non-experts, to synthesize correction mechanisms that constrain the robot’s behavior to avoid failures and encode people’s preferences. My research explores how we can incorporate non-expert feedback in ways that ensure the robot will do what we tell it to do, e.g., through formal synthesis.
Bio: Sanne van Waveren is a final-year Ph.D. candidate at KTH Royal Institute of Technology in Stockholm, Sweden. In her Ph.D., she explores how non-experts can correct high-level robot behavior when the robot’s plan or policy fails, and how we can encode human preferences into robot behavior while ensuring safety, e.g., through formal synthesis. Her research combines concepts and techniques from human-robot interaction, formal methods, and learning to develop robots that can automatically correct their behavior using human feedback.