Title: Teaching Robots About Human Environments: A Multi-sensory Approach to Learning and Using Object Affordances
Vivian Chu
Robotics Ph.D. Student
School of Interactive Computing
College of Computing
Georgia Institute of Technology
Date: Monday, June 6th, 2016
Time: 1:00pm to 3:00pm (EDT)
Location: TBD
Committee:
---------------
Dr. Andrea L. Thomaz (Co-Advisor), Department of Electrical and Computer Engineering, The University of Texas at Austin
Dr. Sonia Chernova (Co-Advisor), School of Interactive Computing, Georgia Institute of Technology
Dr. Henrik I. Christensen, School of Interactive Computing, Georgia Institute of Technology
Dr. Charles Kemp, Department of Biomedical Engineering, Georgia Institute of Technology
Dr. Siddhartha Srinivasa, School of Computer Science, Carnegie Mellon University
Abstract:
-----------
For robots to operate in the real world, an unstructured environment with high levels of uncertainty, they need to be able to learn and adapt. Past work shows that robots can successfully learn individual skills, but to truly work alongside people, robots will need a framework for reasoning and learning throughout their lives. I propose using affordances as the foundation that provides robots with the ability to reason about actions and effects, transfer knowledge, and communicate with people in novel environments. Specifically, I propose a framework for building a library of adaptable multi-sensory affordance models of the world through interaction and human guidance. This thesis will make the following contributions:
* Human-Guided Robot Self-Exploration: develop algorithms that use human guidance to enable robots to efficiently explore the environment and learn affordance models for a diverse range of manipulation tasks
* Multi-sensory Representation of Affordances: develop an affordance representation that integrates visual, haptic, and audio input (an illustrative sketch follows this list)
* Human-Seeded Multi-sensory Adaptable Controllers: develop a control framework for multi-sensory affordance models that enables a robot to adapt trajectories to the environment using demonstration provided by non-expert users
* End-to-end Robust Task Execution in Novel Environments: demonstrate a robotic system that utilizes existing affordance networks and task plans to execute tasks robustly in new environments
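
For readers unfamiliar with the term, a minimal Python sketch of a multi-sensory affordance model appears below. It is illustrative only: the class names, the early fusion by concatenation, and the nearest-centroid classifier are assumptions made for exposition, not the representation proposed in the thesis.

# Minimal sketch of a multi-sensory affordance model: observations from
# several modalities are fused into one feature vector, and the model
# predicts whether an action on an object produces a desired effect
# (e.g., "openable"). All names and the classifier are illustrative.

from dataclasses import dataclass
import numpy as np

@dataclass
class MultiSensoryObservation:
    visual: np.ndarray   # e.g., object shape/color descriptors
    haptic: np.ndarray   # e.g., force/torque readings during contact
    audio: np.ndarray    # e.g., spectral features of contact sounds

    def fused(self) -> np.ndarray:
        # Simple early fusion: concatenate all modalities into one vector.
        return np.concatenate([self.visual, self.haptic, self.audio])

class AffordanceModel:
    """Predicts whether an action affords an effect, from fused features."""

    def fit(self, observations, effect_occurred):
        # Requires at least one positive and one negative interaction.
        X = np.stack([o.fused() for o in observations])
        y = np.asarray(effect_occurred, dtype=bool)
        # Nearest-centroid: one prototype per outcome class.
        self.pos_centroid = X[y].mean(axis=0)
        self.neg_centroid = X[~y].mean(axis=0)
        return self

    def predict(self, obs: MultiSensoryObservation) -> bool:
        # Classify by which outcome prototype the fused features are closer to.
        x = obs.fused()
        return (np.linalg.norm(x - self.pos_centroid)
                < np.linalg.norm(x - self.neg_centroid))

# Example: fit on four exploratory interactions, then query a new observation.
rng = np.random.default_rng(0)
obs = [MultiSensoryObservation(rng.normal(size=8), rng.normal(size=4),
                               rng.normal(size=6)) for _ in range(4)]
model = AffordanceModel().fit(obs, [True, True, False, False])
print(model.predict(obs[0]))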