Speaker:
Amanda Stent
Director of Research, Yahoo
Title:
Two Methods for Easing Video Consumption
Abstract:
Content on the world wide web increasingly takes the form of video; consequently, it is important both to analyze and to summarize video in order to facilitate search, personalization, browsing, etc. In this talk I will present two projects from Yahoo Labs devoted to different aspects of video processing. First, I will present a method for automatic creation of a well-formatted, readable transcript for a video from closed captions or ASR output. Readable transcripts are a necessary precursor to indexing, ranking and content-based summarization of videos. Our approach uses acoustic and lexical features extracted from the video and the raw transcription/caption files. Empirical evaluations of our approach show that it outperforms baseline methods.

Second, I will present a method for video summarization that uses title-based image search results to find visually important shots. A video title is often carefully chosen to be maximally descriptive of the video’s main topic, and hence images related to the title can serve as a proxy for important visual concepts of the main topic. However, images searched using the title can contain noise (images irrelevant to video content) and variance (images of different topics). Our approach to video summarization is a novel co-archetypal analysis technique that learns canonical visual concepts shared between video and images, but not in either alone, by finding a joint-factorial representation of the two data sets. Experimental results show that our approach produces superior quality summaries compared to several recently proposed approaches. I will conclude the talk with some ideas for future work on video summarization using multimodal representations.
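As a rough illustration of the ingredients behind the first project, the sketch below combines a simple acoustic cue (long inter-word pauses taken from caption timings) with a simple lexical cue (sentence-final punctuation) to group ASR/caption words into sentence-like units. It is not the method from the talk; the data structure, threshold, and rules are invented for illustration.

```python
# Hypothetical sketch: recover sentence-like units from timed ASR/caption words
# using a pause threshold (acoustic cue) and end-of-sentence punctuation (lexical cue).
from dataclasses import dataclass
from typing import List

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float    # seconds

def segment_transcript(words: List[Word], pause_threshold: float = 0.6) -> List[str]:
    """Group timed words into sentence-like units for a readable transcript."""
    sentences, current = [], []
    for i, w in enumerate(words):
        current.append(w.text)
        pause = (words[i + 1].start - w.end) if i + 1 < len(words) else None
        lexical_break = w.text.endswith(('.', '?', '!'))          # lexical cue
        acoustic_break = pause is not None and pause > pause_threshold  # acoustic cue
        if lexical_break or acoustic_break or pause is None:
            sentences.append(' '.join(current))
            current = []
    return sentences

# Example: a short caption stream with one long pause between "video" and "first".
words = [Word('so', 0.0, 0.2), Word('today', 0.25, 0.6), Word('we', 0.65, 0.8),
         Word('talk', 0.85, 1.1), Word('about', 1.15, 1.4), Word('video', 1.45, 1.9),
         Word('first', 2.9, 3.2), Word('transcripts', 3.25, 3.9)]
print(segment_transcript(words))
# -> ['so today we talk about video', 'first transcripts']
```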
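For the second project, the abstract builds on archetypal analysis. The sketch below is a minimal NumPy illustration of plain archetypal analysis with simplex-projected gradient steps; it is not the co-archetypal technique described in the talk, which jointly factorizes the video and image feature sets, and the step size and iteration count are arbitrary choices for the sketch.

```python
# Illustrative sketch of archetypal analysis: approximate each frame descriptor
# as a convex combination of "archetypes", which are themselves convex
# combinations of the data (X ~ X B A, with columns of A and B on the simplex).
import numpy as np

def project_simplex_columns(M):
    """Project each column of M onto the probability simplex."""
    k, n = M.shape
    out = np.empty_like(M)
    for j in range(n):
        v = M[:, j]
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u - css / np.arange(1, k + 1) > 0)[0][-1]
        out[:, j] = np.maximum(v - css[rho] / (rho + 1.0), 0.0)
    return out

def archetypal_analysis(X, k, iters=200, lr=1e-3, seed=0):
    """Minimize ||X - X B A||_F^2 over simplex-constrained A (k x n) and B (n x k).

    Uses fixed-step projected gradient for brevity; a real implementation would
    solve the constrained least-squares subproblems per column instead.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    A = project_simplex_columns(rng.random((k, n)))
    B = project_simplex_columns(rng.random((n, k)))
    for _ in range(iters):
        Z = X @ B                                   # current archetypes (d x k)
        R = X - Z @ A                               # reconstruction residual
        A = project_simplex_columns(A + lr * (Z.T @ R))         # descent step on A
        R = X - X @ B @ A
        B = project_simplex_columns(B + lr * (X.T @ R @ A.T))   # descent step on B
    return X @ B, A                                 # archetypes and per-frame weights

# Example: Z, A = archetypal_analysis(np.random.default_rng(1).random((64, 500)), k=5)
```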
Bio:
Amanda Stent manages researchers at Yahoo Labs who work on analysis and summarization of web content (text, images and video). Previously, she was a Principal Member of Technical Staff at AT&T Labs -- Research in NJ, and before that an associate professor in the Computer Science Department at Stony Brook University in Stony Brook, NY. She holds a PhD in computer science from the University of Rochester. She has authored over 80 papers on natural language processing and holds several patents. She is president of the ACL/ISCA Special Interest Group on Discourse and Dialogue and one of the rotating editors of the journal Dialogue & Discourse.