Title: Human-Guided Task Transfer for Interactive Robots
Tesca Fitzgerald
Ph.D. Candidate in Computer Science
School of Interactive Computing
College of Computing
Georgia Institute of Technology
Date: May 5, 2020
Time: 1:30pm-3:30pm (ET)
Location: https://cmu.zoom.us/j/99615614842
Remote-only in accordance with GT COVID-19 guidelines.
Committee
Dr. Ashok Goel (Advisor, School of Interactive Computing, Georgia Tech)
Dr. Andrea Thomaz (Advisor, Department of Electrical and Computer Engineering, UT Austin)
Dr. Sonia Chernova (School of Interactive Computing, Georgia Tech)
Dr. Henrik Christensen (Department of Computer Science and Engineering, University of California, San Diego)
Dr. Brian Scassellati (Department of Computer Science, Yale University)
Abstract
Adaptability is an essential skill in human cognition, enabling us to draw from our extensive, life-long experiences with various objects and tasks in order to address novel problems. To date, robots do not have this kind of adaptability, and yet, as our expectations of robots' interactive and assistive capacity grow, it will be increasingly important for them to adapt to unpredictable environments in a manner similar to humans. While a robot can be pre-programmed for many tasks and their variations, specifying these behaviors would require tedious effort and still would not adequately prepare a robot for every scenario it may encounter. Rather than require more demonstration data in order to generalize across these variations, we leverage continued interaction with the teacher within the context of the new target task.
This approach first requires an understanding of how task differences, interaction, and transfer are related. We define a taxonomy of transfer problems that models the relationship between task differences and information requirements for transfer. Based on this taxonomy, we analyze a particular category of transfer problems in which the target environment contains new, unfamiliar objects. We present an interactive approach that enables the robot to learn the mapping between familiar source objects and new target objects using assistance from a human teacher (provided by indicating the next object to be used at each step of the task). After a limited number of assists, our approach enables the robot to autonomously infer the objects used to complete the remainder of the task. Furthermore, we identify the effect of noisy feedback during interaction and present a confidence-guided approach to moderating the robot's requests for assistance.
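For illustration only, the confidence-guided assistance idea described above can be sketched roughly as follows; the names (ObjectMapper, ask_teacher) and the threshold value are hypothetical placeholders, not the dissertation's implementation:

# Minimal sketch (assumed names, not the dissertation's code): the robot
# predicts the next target object from its learned source-to-target mapping
# and asks the teacher for an assist only when its confidence is low.
from dataclasses import dataclass, field

@dataclass
class ObjectMapper:
    mapping: dict = field(default_factory=dict)      # source object -> target object
    confidence: dict = field(default_factory=dict)   # source object -> confidence in [0, 1]

    def predict(self, source_obj):
        return self.mapping.get(source_obj), self.confidence.get(source_obj, 0.0)

    def update(self, source_obj, target_obj):
        # Record a teacher-provided correspondence with full confidence.
        self.mapping[source_obj] = target_obj
        self.confidence[source_obj] = 1.0

def next_object(mapper, source_obj, ask_teacher, threshold=0.7):
    """Return the target object for this task step, requesting an assist
    from the teacher only when the mapper's confidence is below threshold."""
    predicted, conf = mapper.predict(source_obj)
    if predicted is None or conf < threshold:
        target = ask_teacher(source_obj)   # teacher indicates the next object to use
        mapper.update(source_obj, target)
        return target
    return predicted                       # act autonomously

In this placeholder, teacher assists simply overwrite low-confidence entries in the mapping; the dissertation's treatment of noisy feedback and inference is richer than this sketch.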
We then address a second category of transfer problems in which we replace the tool that the robot uses to manipulate other objects in the environment. For example, the robot may learn a scooping task using a spoon, and at a later time must transfer its task model to use a mug instead. We utilize interactive corrections to record the motion constraints imposed by the new tool, and then model the underlying relationship between the robot's gripper and the new tooltip. Not only do we find that corrections are sufficient for the robot to model the new constraints afforded by the tool within the context of the corrected task, but the learned model can also be reused on other tasks that provide a similar context for that tool (e.g. in the tool surfaces used to execute the task).
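As a rough, hypothetical sketch of the second setting, one way to model the relationship between the robot's gripper and a new tooltip is to estimate a fixed gripper-frame offset from a few corrected keyframes; the function names and the use of plain homogeneous transforms are illustrative assumptions, not the method as implemented in the dissertation:

# Illustrative sketch (assumed representation): estimate a fixed
# gripper-to-tooltip offset for a new tool from interactive corrections.
# Each correction pairs the gripper pose at a corrected keyframe with the
# point on the tool that should contact the environment.
import numpy as np

def estimate_tooltip_offset(gripper_poses, contact_points):
    """gripper_poses: 4x4 homogeneous transforms giving the gripper pose in
    the world frame (they map gripper-frame points to world coordinates).
    contact_points: 3D points (world frame) touched by the tooltip.
    Returns the mean tooltip position expressed in the gripper frame."""
    offsets = []
    for T_wg, p_w in zip(gripper_poses, contact_points):
        T_gw = np.linalg.inv(T_wg)            # world point -> gripper frame
        p_g = T_gw @ np.append(p_w, 1.0)
        offsets.append(p_g[:3])
    return np.mean(offsets, axis=0)

def tooltip_in_world(T_wg, offset_g):
    """Apply the learned gripper-frame offset to a new gripper pose to
    predict where the tooltip will be in the world frame."""
    return (T_wg @ np.append(offset_g, 1.0))[:3]

Once such an offset is estimated from corrections on one task, it can in principle be reused on other tasks that engage the tool in a similar way, which mirrors the reuse result described above.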
Overall, this work enables a robot to address a wide variety of transfer problems without extensive demonstrations or domain-specific knowledge, and thus contributes toward a future of adaptive, collaborative robots.