Title: Analyzing Health-related Behaviors Using First-person Vision
Committee:
Dr. James Rehg, IC, Chair, Advisor
Dr. Mark Clements, ECE, Co-Advisor
Dr. Omer Inan, ECE
Dr. Gregory Abowd, CoC
Dr. Thomas Ploetz, IC
Dr. Nabil Alshurafa, Northwestern
Abstract: Developing computational models for human behavior analysis has been an interesting topic in ubiquitous computing. Wearable devices are widely used in the healthcare industry, and practitioners and users expect increasingly personalized functions. The signals from wearable devices not only record the physiological status of the subject and the environment they are exposed to, but also implicitly record their attention and intention. Extracting health-related information from the large amount of data collected from various devices is a challenging topic in this research field and a promising one in the health industry. This talk focuses on how to effectively analyze data collected from wearable devices, starting from videos recorded by wearable cameras and moving to multi-modal signals. First, I will talk about models trained to recognize hand-object manipulation actions from first-person vision. Then, I will present a method to detect the screen-using moments of a wearable camera user: it can determine when the person is looking at a screen and localize the screen in the frame without an eye-tracker. In the third part, I will work with multi-modal signals, building a weighted kernel density estimation model to predict the correspondence between video and acceleration.
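
For readers unfamiliar with the last technique, the following is a minimal sketch of a weighted Gaussian kernel density estimate applied to candidate temporal offsets between a video stream and an accelerometer stream. The function name, the use of per-sample confidence weights, and the bandwidth value are illustrative assumptions, not the specific model presented in the talk.

    import numpy as np

    def weighted_kde(samples, weights, query_points, bandwidth=1.0):
        """Evaluate a weighted Gaussian kernel density estimate.

        samples:      1-D array of observed values (e.g. candidate time offsets)
        weights:      non-negative weight per sample (e.g. matching confidence)
        query_points: locations at which to evaluate the density
        """
        samples = np.asarray(samples, dtype=float)
        weights = np.asarray(weights, dtype=float)
        query_points = np.asarray(query_points, dtype=float)
        weights = weights / weights.sum()          # normalize weights to sum to 1
        # Pairwise differences between every query point and every sample
        diff = query_points[:, None] - samples[None, :]
        kernels = np.exp(-0.5 * (diff / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
        return kernels @ weights                   # weighted mixture of Gaussian kernels

    # Hypothetical usage: score candidate offsets (in seconds) between the two
    # modalities, weighting each observation by its confidence, then pick the
    # offset with the highest estimated density as the predicted correspondence.
    offsets = np.array([0.10, 0.12, 0.30, -0.05])
    confidence = np.array([0.9, 0.8, 0.2, 0.5])
    grid = np.linspace(-1.0, 1.0, 201)
    density = weighted_kde(offsets, confidence, grid, bandwidth=0.05)
    best_offset = grid[np.argmax(density)]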