PhD Defense by Vivian Chu


Event Details
  • Date/Time: Wednesday, December 6, 2017, 1:00 pm - 3:00 pm
  • Location: CCB 340
Summaries

Summary Sentence: Teaching Robots about Human Environments


Title: Teaching Robots about Human Environments

 

Vivian Chu

Robotics Ph.D. Candidate

School of Interactive Computing

Georgia Institute of Technology

 

Date: December 6th, 2017 (Wednesday)

Time: 1:00pm to 3:00pm (EST)

Location: CCB 340

 

Committee:

-------------------

Dr. Andrea L. Thomaz (Co-Advisor), Department of Electrical and Computer Engineering, The University of Texas at Austin 

Dr. Sonia Chernova (Co-Advisor), School of Interactive Computing, Georgia Institute of Technology 

Dr. Henrik I. Christensen, Department of Computer Science and Engineering, University of California, San Diego

Dr. Charles C. Kemp, Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology

Dr. Siddhartha Srinivasa, Paul G. Allen School of Computer Science & Engineering, University of Washington

 

 

Abstract:

-------------------

 

The real world is complex, unstructured, and contains high levels of uncertainty. To operate in such environments, robots need to learn and adapt. One framework that enables this learning and adaptation is modeling the world using affordances. With affordance models, robots can reason about which actions they need to take to achieve a goal. This thesis provides a framework that allows robots to learn these models through interaction and human guidance.
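To make the idea concrete, here is a minimal sketch of what an affordance representation might look like: a learned association between an object, an action, and the predicted effect, which a robot can query to select actions for a goal. The object and action labels are hypothetical illustrations, not the thesis's actual implementation.

```python
# Illustrative sketch only: an affordance links an object, an action, and the
# predicted effect, so a robot can pick the action that achieves a goal.
from dataclasses import dataclass


@dataclass(frozen=True)
class Affordance:
    obj: str      # object the robot can act on (hypothetical labels)
    action: str   # motor action available to the robot
    effect: str   # predicted outcome of applying the action


# A toy "learned" model: a set of (object, action) -> effect associations.
MODEL = [
    Affordance("drawer", "pull", "open"),
    Affordance("drawer", "push", "closed"),
    Affordance("button", "press", "activated"),
]


def actions_for_goal(obj: str, goal: str) -> list:
    """Return the actions predicted to produce the desired effect on obj."""
    return [a.action for a in MODEL if a.obj == obj and a.effect == goal]


print(actions_for_goal("drawer", "open"))  # ['pull']
```

In the thesis, such models are learned from multisensory interaction data rather than hand-coded; this toy lookup only illustrates the reasoning step from goal to action.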

 

Within robotic affordance learning, there has been a large focus on learning visual skill representations, owing to the difficulty of getting robots to physically interact with the environment. Furthermore, utilizing different modalities (e.g., touch and sound) introduces challenges such as differing sampling rates and data resolutions. This thesis addresses these challenges by providing several methods for interactively gathering multisensory data through human-guided robot self-exploration, along with an approach that integrates visual, haptic, and auditory data for adaptive object manipulation.

 

We take a human-centered approach to tackling the challenge of robots operating in unstructured environments. This thesis makes the following contributions to the field of robot learning: (1) a human-centered framework for robot affordance learning that demonstrates how human teachers can guide the robot throughout the entire affordance-learning pipeline; (2) a human-guided robot self-exploration framework contributing several algorithms that use human guidance to enable robots to efficiently explore the environment and learn affordance models for a diverse range of manipulation tasks; (3) a multisensory affordance model that integrates visual, haptic, and audio input; and (4) a novel control framework that uses multisensory data and human-guided exploration to adapt affordances for object manipulation.

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD Defense
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Nov 27, 2017 - 8:28am
  • Last Updated: Nov 27, 2017 - 8:28am