PhD Proposal by Amirreza Shaban


Event Details
  • Date/Time:
    • Friday, October 11, 2019
      12:00 pm - 2:00 pm
  • Location: CODA C1215 (12th floor)
Summaries

Summary Sentence: Meta-Learning Techniques for Few-Shot Object Recognition and Hyperparameter Tuning

Title: Meta-Learning Techniques for Few-Shot Object Recognition and Hyperparameter Tuning

Amirreza Shaban

School of Interactive Computing

College of Computing

Georgia Institute of Technology

 

Date: Friday, October 11, 2019

Time: 12:00 pm – 2:00 pm (ET)

Location: CODA C1215 (12th floor)

 

Committee:

Dr. Byron Boots (Advisor, School of Interactive Computing, Georgia Institute of Technology)

Dr. James Hays (School of Interactive Computing, Georgia Institute of Technology)

Dr. Dhruv Batra (School of Interactive Computing, Georgia Institute of Technology)

Dr. Fuxin Li (School of Electrical Engineering and Computer Science, Oregon State University)

 

Abstract:

Deep neural networks are powerful at solving classification problems in computer vision. However, learning classifiers with these models requires a large amount of labeled training data, and recent approaches struggle to adapt to new classes in a data-efficient manner. The human brain, by contrast, can exploit prior knowledge to learn new concepts from fewer examples and with less supervision. Many meta-learning algorithms have been proposed to fill this gap, but they come with practical and theoretical limitations. We review bi-level optimization as a general framework for few-shot learning and hyperparameter optimization and discuss the practical limitations of computing the full gradient. We provide theoretical guarantees for the convergence of bi-level optimization with approximate gradients computed by truncated back-propagation. Next, we propose an empirical method for few-shot semantic segmentation: instead of solving the inner optimization problem, we directly estimate its result with a general function approximator. Finally, we discuss extensions of this work, focusing on learning to find objects when full supervision is not available for the few training examples.
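The truncated back-propagation idea in the abstract can be illustrated with a minimal sketch. The toy setup below (the data, the ridge-style inner loss, and all step sizes are illustrative assumptions, not taken from the proposal) runs T inner gradient steps on a regularized training loss, then computes the hypergradient of a held-out validation loss with respect to the regularization weight by backpropagating through only the last K inner steps:

```python
import numpy as np

# Illustrative toy problem: inner loss L_tr(w) = ||Xw - y||^2 + lam * ||w||^2,
# outer (validation) loss L_val(w) = ||Xv w - yv||^2.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 5)), rng.normal(size=20)
Xv, yv = rng.normal(size=(10, 5)), rng.normal(size=10)
alpha, T, K, lam = 0.01, 50, 10, 0.5

def inner_step(w, l):
    # One gradient step on the inner loss with regularization weight l.
    return w - alpha * (2 * X.T @ (X @ w - y) + 2 * l * w)

# Forward pass: run T inner steps, storing the trajectory needed for backprop.
traj = [np.zeros(5)]
for _ in range(T):
    traj.append(inner_step(traj[-1], lam))

def hypergrad(k):
    """Backpropagate dL_val/dlam through only the last k of the T inner steps."""
    g = 2 * Xv.T @ (Xv @ traj[-1] - yv)        # dL_val/dw_T
    d = 0.0
    for t in range(T - 1, T - 1 - k, -1):
        d += g @ (-2 * alpha * traj[t])        # (d step / d lam)^T g
        g = g - alpha * (2 * X.T @ (X @ g) + 2 * lam * g)  # (d step / d w)^T g
    return d

d_lam = hypergrad(K)   # truncated hypergradient of L_val w.r.t. lam
print(d_lam)
```

Because the inner dynamics here are contractive, the contributions of early steps decay, so backpropagating through only the last K steps yields a cheap approximation of the full hypergradient; the convergence guarantees mentioned in the abstract concern exactly this kind of approximation.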

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD proposal
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Oct 4, 2019 - 2:42pm
  • Last Updated: Oct 4, 2019 - 2:42pm