PhD Defense by Wenhao Yu


Event Details
  • Date/Time:
    • Tuesday, April 28, 2020
      2:00 pm - 4:00 pm
  • Location: REMOTE
  • URL: BlueJeans Link
Summaries

Summary Sentence: Learning to Walk using Deep Reinforcement Learning and Transfer Learning


Title: Learning to Walk using Deep Reinforcement Learning and Transfer Learning


Wenhao Yu

School of Interactive Computing

College of Computing

Georgia Institute of Technology


Date: Tuesday, April 28, 2020

Time: 2:00 PM - 4:00 PM (EDT)

BlueJeans: https://bluejeans.com/206491397

**Note: This defense is remote-only due to the Institute's guidelines on COVID-19.**


Committee:

Dr. Greg Turk (Advisor, School of Interactive Computing, Georgia Tech)

Dr. C. Karen Liu (Advisor, School of Engineering, Stanford University / School of Interactive Computing, Georgia Tech)

Dr. Charlie Kemp (Department of Biomedical Engineering / School of Interactive Computing, Georgia Tech)

Dr. Sergey Levine (Department of Electrical Engineering and Computer Sciences, University of California, Berkeley)

Dr. Michiel van de Panne (Department of Computer Science, University of British Columbia)


Abstract:

In this dissertation, we seek to develop computational tools that reproduce the locomotion of humans and animals in complex and unpredictable environments. Such tools can have a significant impact on computer graphics, robotics, machine learning, and biomechanics. However, there are two main hurdles to achieving this goal. First, synthesizing a successful locomotion policy requires precise control of a high-dimensional, under-actuated system and striking a balance among conflicting goals such as moving forward, conserving energy, and keeping balance. Second, the synthesized locomotion policy needs to generalize to new environments that were not present during optimization and training, in order to cope with unexpected situations during execution.


To achieve this goal, we first introduce a Deep Reinforcement Learning (DRL) approach for learning locomotion controllers for simulated legged creatures without using motion data. We propose a loss term in the DRL objective that encourages the agent to exhibit symmetric behavior, and a curriculum learning approach that provides modulated physical assistance, in order to train energy-efficient controllers successfully. We demonstrate this approach on a variety of simulated characters which, when the two proposed ideas are combined, achieve low-energy, symmetric locomotion gaits without relying on external motion data.
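
A minimal sketch of the mirror-symmetry idea, assuming a PyTorch policy network and hypothetical `mirror_state` / `mirror_action` mappings (these names are illustrative; the abstract does not specify them): the loss penalizes the policy when its action in a mirrored state is not the mirror image of its action in the original state.

```python
# Sketch of a mirror-symmetry loss term added to a DRL objective.
# `policy`, `mirror_state`, and `mirror_action` are illustrative names.
import torch

def symmetry_loss(policy, states, mirror_state, mirror_action):
    """Penalize asymmetric behavior: pi(M_s(s)) should equal M_a(pi(s))."""
    actions = policy(states)                         # pi(s)
    mirrored_actions = policy(mirror_state(states))  # pi(M_s(s))
    return torch.mean((mirrored_actions - mirror_action(actions)) ** 2)

# During training, this term would be weighted and added to the usual
# RL loss, e.g. total_loss = rl_loss + w_sym * symmetry_loss(...).
```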


Next, we introduce a set of Transfer Learning (TL) algorithms that generalize the learned locomotion controllers to novel environments. Specifically, we focus on transferring a simulation-trained locomotion controller to a real legged robot, also known as the Sim-to-Real transfer problem. We first introduce a transfer learning algorithm that can operate successfully under unknown and changing dynamics that lie within the range seen during training. To enable successful transfer outside the training environments, we further propose an algorithm that uses a limited number of samples from the testing environments to adapt the simulation-trained policy. We demonstrate Sim-to-Real transfer on a biped robot, the Robotis Darwin OP2, and a quadruped robot, the Ghost Robotics Minitaur.
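
One way to realize this kind of few-sample adaptation, sketched below under assumptions not spelled out in the abstract: condition the policy on a latent vector that encodes dynamics during training, then at test time search over that latent using episode returns from a handful of rollouts in the target environment. The `rollout` function and all names are illustrative; CMA-ES is one plausible choice of optimizer.

```python
# Sketch of test-time latent search for sim-to-real adaptation.
import numpy as np
import cma  # pip install cma

def adapt_latent(policy, rollout, mu_dim, n_iters=10):
    """Find the latent mu that maximizes return in the target environment,
    using only a small number of real rollouts."""
    es = cma.CMAEvolutionStrategy(np.zeros(mu_dim), 0.5)
    for _ in range(n_iters):
        candidates = es.ask()
        costs = [-rollout(policy, mu) for mu in candidates]  # CMA-ES minimizes
        es.tell(candidates, costs)
    return es.result.xbest  # best latent found
```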


Finally, we consider the problem of safety during policy execution and transfer. We propose training a universal safe policy (USP) that steers the robot away from unsafe states starting from a diverse set of initial states, along with an algorithm that combines the USP with a task policy to complete the task while acting safely. We demonstrate that the resulting algorithm allows policies to adapt to notably different simulated dynamics with at most two failed trials, suggesting a promising path toward learning robust and safe control policies for sim-to-real transfer.
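
A minimal sketch of one way to combine the two policies, assuming a learned safety value function `safety_value(state)` that predicts how recoverable a state is; the threshold and all names are illustrative, not the dissertation's actual formulation.

```python
# Sketch of arbitrating between a task policy and a universal safe policy.
SAFETY_THRESHOLD = 0.5  # assumed cutoff below which a state is treated as risky

def combined_action(state, task_policy, safe_policy, safety_value):
    """Run the task policy while the state looks safe; otherwise fall back
    to the USP to steer the robot away from unsafe states."""
    if safety_value(state) >= SAFETY_THRESHOLD:
        return task_policy(state)  # pursue the task
    return safe_policy(state)      # prioritize staying safe
```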

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD Defense