Title: Teaching Robots to Learn about Objects by Interaction
Committee:
Dr. Issa, Advisor
Dr. Ha, Co-Advisor
Dr. Davenport, Co-Advisor
Dr. Vela, Chair
Dr. Chernova
Abstract: The objective of the proposed research is to develop action policies that aid the perception of a robotic agent. Traditionally, perception in robotics has been treated as a one-way flow of information from the outside world into the robot. This framing overlooks the fact that a robot can interact with its environment and thereby perceive information that would not otherwise be available to it. For example, a robot can push an object, observe how it moves, and estimate its mass. Beyond such latent object properties, interaction can also be used to better understand a visual scene. Real-world scenes are often cluttered with many objects, some of which may be partially or completely occluded. To tackle such problems, we design neural network architectures that exploit synergies between supervised training and reinforcement learning. We train policies that maintain an internal belief about the environment and take actions that gradually improve it. We also propose a new dataset of challenging cluttered arrangements of ambiguous objects with similar shapes or textures, and use it to validate the performance of our method.
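
The sketch below is not the proposal's actual method; it is a minimal illustration of the "push an object, observe how it moves, and estimate its mass" idea from the abstract, assuming a simple frictionless point-mass model and a Bayesian update over a handful of candidate masses. All names, values, and the observation model are hypothetical.

```python
# Minimal interactive-perception sketch (illustrative only): the robot keeps a
# belief over candidate masses and sharpens it with each push via Bayes' rule.
import numpy as np

rng = np.random.default_rng(0)

mass_hypotheses = np.array([0.5, 1.0, 2.0, 4.0])   # candidate masses (kg)
belief = np.full(len(mass_hypotheses), 0.25)        # uniform prior
true_mass = 2.0                                     # hidden ground truth (kg)
push_force, push_time, obs_noise = 5.0, 0.2, 0.02   # N, s, m (std dev)

def predicted_displacement(mass):
    # Frictionless point-mass model: x = 0.5 * (F / m) * t^2
    return 0.5 * (push_force / mass) * push_time**2

for step in range(5):
    # Interact: push the real object and observe a noisy displacement.
    observed = predicted_displacement(true_mass) + rng.normal(0.0, obs_noise)

    # Update: Gaussian likelihood of the observation under each hypothesis.
    residual = observed - predicted_displacement(mass_hypotheses)
    likelihood = np.exp(-0.5 * (residual / obs_noise) ** 2)
    belief = belief * likelihood
    belief /= belief.sum()

    print(f"push {step + 1}: belief over {mass_hypotheses} kg -> "
          f"{np.round(belief, 3)}")
```

Running the loop, the belief concentrates on the hypothesis closest to the true mass after a few pushes, which is the basic pattern the abstract describes: maintain an internal belief and take actions that gradually improve it.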