University of Maryland’s Yiannis Aloimonos presents “The Manipulation Action Grammar: A Key to Intelligent Robots” as part of the IRIM Robotics Seminar Series. The event will be held in the Marcus Nanotechnology Bldg., Rooms 1116-1118, from 12-1 p.m. and is open to the public.
Abstract
Humanoid robots will need to learn the actions that humans perform. They will need to recognize these actions when they see them and they will need to perform these actions themselves. In this presentation, it is proposed that this learning task can be achieved using the manipulation grammar.
Context-free grammars have been in fashion in linguistics because they provide a simple and precise mechanism for describing how phrases in a natural language are built from smaller blocks. They also describe exactly the basic recursive structure of natural languages: the way in which clauses nest inside other clauses, and the way in which lists of adjectives and adverbs are followed by nouns and verbs. Similarly, for manipulation actions, every complex activity is built from smaller blocks involving hands and their movements, as well as objects, tools and the monitoring of their state. Thus, interpreting a “seen” action is like understanding language, and executing an action from knowledge in memory is like producing language. Several experiments will be shown in which human actions in the arts-and-crafts or assembly domain are interpreted by parsing the visual input on the basis of the manipulation grammar. Realizing this parsing requires a network of visual processes that attend to objects and tools, segment and recognize them, track the moving objects and hands, and monitor the state of objects to determine goal completion. These processes will also be explained, and we will conclude with demonstrations of robots learning how to perform tasks by watching videos of relevant human activities.
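To make the analogy concrete, here is a minimal sketch (assuming Python with the NLTK library) of how a toy context-free grammar over hands, actions and objects could parse a stream of symbolic detections. The nonterminals, terminals and observation sequence below are invented for illustration and are not the grammar used in the speaker's work.

    # Toy manipulation-action grammar: a hypothetical illustration, not the
    # grammar from the talk. It assumes a vision front end has already turned
    # the video into discrete symbols such as 'hand', 'grasp', 'knife'.
    import nltk

    toy_grammar = nltk.CFG.fromstring("""
        AP -> HP A OP | AP AP
        HP -> 'hand'
        A  -> 'grasp' | 'cut' | 'place'
        OP -> 'knife' | 'bread' | 'bowl'
    """)

    parser = nltk.ChartParser(toy_grammar)

    # A "seen" activity as a symbol stream: grasp the knife, then cut the bread.
    observed = ['hand', 'grasp', 'knife', 'hand', 'cut', 'bread']

    for tree in parser.parse(observed):
        print(tree)  # the parse tree of the observed activity

Running the sketch prints a parse tree in which the two elementary actions (grasping the knife, cutting the bread) combine into one composite activity, mirroring the way clauses combine into a sentence.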
—Work funded by the European Union under the project POETICON++ in the Cognitive Systems Program, by the National Science Foundation under the project “Robots with Vision that Find Objects” in the Cyber-Physical Systems Program, and by DARPA under a program on autonomy.
—Joint work with Yezhou Yang and Cornelia Fermüller.
Bio
Yiannis Aloimonos is a professor of computational vision and intelligence in the Department of Computer Science at the University of Maryland, College Park, and the director of the Computer Vision Laboratory at the Institute for Advanced Computer Studies (UMIACS). He is also affiliated with the Institute for Systems Research and the Neural and Cognitive Science Program. He was born in Sparta, Greece, and studied mathematics in Athens and computer science at the University of Rochester, NY (Ph.D. 1990). He is interested in active perception and the modeling of vision as an active, dynamic process for real-time robotic systems. For the past five years he has been working on bridging signals and symbols, specifically on the relationship of vision to reasoning, action and language.