Behavior Imaging Seminar - Dimitris Metaxas


Event Details
  • Date/Time:
    • Wednesday April 18, 2012 - Thursday April 19, 2012
      4:00 pm - 4:59 pm
  • Location: Nano Building, Room 1116
  • Fee(s): N/A
Contact

Stephanie Tofighi

School of Interactive Computing

404-385-7450

Summaries

Summary Sentence: Behavior Analysis based on Visual Input in Structured and Unstructured Environments

Full Summary: A visiting professor from Rutgers University will present a talk on Behavior Imaging titled "Behavior Analysis based on Visual Input in Structured and Unstructured Environments."

Behavior Analysis based on Visual Input in Structured and Unstructured Environments

People interact visually through body movements and especially the face. Over the past twenty years we have been developing methods for behavior analysis from visual input. In particular, we have developed a novel class of stochastic deformable models that allow a generic deformable model to be fitted to any face or body with minimal training. In the case of faces, our system allows real-time tracking even if the face is rotated up to 90°, based on a linearization of the related facial parameter manifold. By combining our existing 2D deformable face model with a new 3D deformable face model, we correct for the warped appearance of faces due to variations in 3D head pose, yielding pose-invariant facial features that in turn improve the detection of facial expressions in non-frontal poses. The benefit of transforming the face region to a frontal pose is that it filters out the effects of head orientation from the appearance features computed from that region. We can therefore learn pose-independent recognition models with less training data, because there is less variation in facial appearance than when the head is allowed to rotate freely in 3D. In addition, new methods for 3D body tracking from single video sequences will also be shown.
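The frontalization step described above can be illustrated schematically: estimate the 3D head pose, apply the inverse rotation to the fitted 3D model points, and project them back to the image plane so that appearance features are computed in a canonical frontal view. A minimal sketch of this idea, assuming a yaw-only rotation and orthographic projection (the function names and simplifications are illustrative, not the speaker's actual implementation):

```python
import math

def frontalize(landmarks_3d, yaw_rad):
    """Apply the inverse of the estimated yaw rotation to 3D landmarks.

    Rotating by -yaw about the vertical (y) axis maps a face seen at
    that yaw angle back to a frontal pose.
    """
    c, s = math.cos(-yaw_rad), math.sin(-yaw_rad)
    frontal = []
    for x, y, z in landmarks_3d:
        xr = c * x + s * z   # rotation about the y-axis
        zr = -s * x + c * z
        frontal.append((xr, y, zr))
    return frontal

def project(landmarks_3d):
    # Orthographic projection onto the image plane: drop the depth.
    return [(x, y) for x, y, _ in landmarks_3d]

# A landmark observed with the head turned 30 degrees maps back to
# its frontal position (0, 0, 1) after frontalization.
yaw = math.radians(30)
observed = [(math.sin(yaw), 0.0, math.cos(yaw))]
frontal = frontalize(observed, yaw)
```

In practice the full 3D rotation (yaw, pitch, roll) would be estimated by fitting the deformable model, and the texture itself would be warped, but the principle is the same: remove the pose component before computing appearance features.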

These new methods have allowed us to analyze a wide variety of behaviors from a single video stream, in both structured and unstructured environments. We will demonstrate the application of our methods to the analysis of ASL (manual and non-manual components), emotions, autism, alcohol studies, deception, and synchrony. Finally, we will conclude with future research directions, including the fusion of multiple modalities.


Collaborators: C. Neidle (BU), M. Bates (Rutgers), J. Burgoon (UA), C. Vogler (Gallaudet)

Additional Information

In Campus Calendar
No
Groups

College of Computing

Invited Audience
No audiences were selected.
Categories
Seminar/Lecture/Colloquium
Keywords
Behavior Imaging, computer vision, Visual Input
Status
  • Created By: Stephanie Tofighi
  • Workflow Status: Published
  • Created On: Apr 13, 2012 - 8:39am
  • Last Updated: Oct 7, 2016 - 9:58pm