Ph.D. Proposal Oral Exam - Yun Zhang

Event Details
  • Date/Time:
    • Wednesday August 7, 2019 - Thursday August 8, 2019
      11:00 am - 12:59 pm
  • Location: Room 223, TSRB
  • Fee(s):
    N/A
Contact
No contact information submitted.
Summaries

Summary Sentence: Analyzing Health-related Behaviors Using First-person Vision

Full Summary: No summary paragraph submitted.

Title:  Analyzing Health-related Behaviors Using First-person Vision

Committee: 

Dr. Rehg

Dr. Clements, Co-Advisor       

Dr. Inan, Chair

Dr. Abowd

Abstract:

The objective of the proposed research is to develop computational models for human behavior analysis. Wearable devices are widely used in the healthcare industry, and practitioners and users increasingly expect more personalized functionality. The signals from wearable devices record not only the physiological status of the subject and the environment they are exposed to, but also, implicitly, their attention and intention. Extracting health-related information from the large amounts of data collected by these devices is a challenging research topic and a promising direction for the health industry. This thesis focuses on how to effectively analyze data collected from wearable devices, starting with video recorded by wearable cameras and extending to multi-modal signals. First, I present a model for first-person action recognition that leverages the motion and appearance properties shared across different actions; it can predict novel actions that the model was not trained on. Second, I develop a method to detect the screen-using moments of a wearable camera user: it identifies when the person is looking at a screen and localizes the screen in the frame without an eye-tracker. Finally, I propose to work with multi-modal signals. I will build a deep neural network to predict the correspondence between video and acceleration, using it to find the time-offset that synchronizes the two signals and to explore how the learned embedding space can be used in cross-modal retrieval and activity recognition tasks.
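To make the final component concrete, below is a minimal sketch of offset estimation by embedding alignment, assuming the video and acceleration streams have already been encoded into per-frame, L2-normalized embeddings by learned encoders (not shown here). The function name, argument shapes, and the simple exhaustive search are illustrative assumptions, not the proposal's actual method.

    import numpy as np

    def estimate_offset(video_emb, accel_emb, max_shift):
        # video_emb, accel_emb: (T, D) arrays of L2-normalized per-frame
        # embeddings sampled at the same rate; max_shift: search range in frames.
        best_shift, best_score = 0, -np.inf
        for s in range(-max_shift, max_shift + 1):
            # Shift the acceleration embeddings by s frames relative to the video.
            if s >= 0:
                v, a = video_emb[s:], accel_emb[:len(accel_emb) - s]
            else:
                v, a = video_emb[:s], accel_emb[-s:]
            n = min(len(v), len(a))
            if n == 0:
                continue
            # Mean cosine similarity over the overlapping frames.
            score = float(np.mean(np.sum(v[:n] * a[:n], axis=1)))
            if score > best_score:
                best_shift, best_score = s, score
        return best_shift

The shift that maximizes the mean cosine similarity is taken as the time-offset; the same embedding space could then support cross-modal retrieval via nearest-neighbor lookup between modalities.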

Additional Information

In Campus Calendar
No
Groups

ECE Ph.D. Proposal Oral Exams

Invited Audience
Public
Categories
Other/Miscellaneous
Keywords
Ph.D. proposal, graduate students
Status
  • Created By: Daniela Staiculescu
  • Workflow Status: Published
  • Created On: Jul 25, 2019 - 5:49pm
  • Last Updated: Jul 25, 2019 - 5:49pm