How can a robot learn the foundations of knowledge?


Event Details
  • Date/Time:
    • Friday January 15, 2010 - Saturday January 16, 2010
      11:00 am - 11:59 am
  • Location: Technology Square Research Building, Auditorium
  • Fee(s):
    $0.00
Contact

Georgia Tech Media Relations
Laura Diamond
laura.diamond@comm.gatech.edu
404-894-6016
Jason Maderer
maderer@gatech.edu
404-660-2926

Summaries

Summary Sentence: The Artificial Intelligence and Cognitive Science Seminar Series presents Benjamin Kuipers from the University of Michigan

Full Summary: The Artificial Intelligence and Cognitive Science Seminar Series presents Benjamin Kuipers from the University of Michigan

The Artificial Intelligence and Cognitive Science Seminar Series presents Benjamin Kuipers from the University of Michigan and his talk "How can a robot learn the foundations of knowledge?"

An embodied agent experiences the physical world through low-level sensory and motor interfaces (the "pixel level"). However, in order to function intelligently, it must be able to describe its world in terms of higher-level concepts such as places, paths, objects, actions, goals, plans, and so on (the "object level"). How can higher-level concepts such as these, which make up the foundation of commonsense knowledge, be learned from unguided experience at the pixel level? I will describe progress toward a positive answer to this question.

This question is important in practical terms: as robots are developed with increasingly complex sensory and motor systems, and are expected to function over extended periods of time, it becomes impractical for human engineers to implement the robots' high-level concepts and define how those concepts are grounded in sensorimotor interaction. The same question is also important in theory: must the knowledge of an AI system necessarily be programmed in by a human being, or can the concepts at the foundation of commonsense knowledge be learned from unguided experience?

BIO:
Benjamin Kuipers joined the University of Michigan in January 2009 as Professor of Computer Science and Engineering. Prior to that, he held an endowed Professorship in Computer Sciences at the University of Texas at Austin. He received his B.A. from Swarthmore College, and his Ph.D. from MIT. He investigates the representation of commonsense and expert knowledge, with particular emphasis on the effective use of incomplete knowledge. His research accomplishments include developing the TOUR model of spatial knowledge in the cognitive map, the QSIM algorithm for qualitative simulation, the Algernon system for knowledge representation, and the Spatial Semantic Hierarchy models of knowledge for robot exploration and mapping. He has served as Department Chair at UT Austin, and is a Fellow of AAAI and IEEE.

Faculty Host: Ashok Goel, Design Intelligence Lab
Student Host: Michael Helms, Design Intelligence Lab

Additional Information

In Campus Calendar
No
Groups

Digital Lounge - Digital Life, Digital Lounge

Invited Audience
No audiences were selected.
Categories
Conference/Symposium
Keywords
artificial intelligence, robot
Status
  • Created By: David Terraso
  • Workflow Status: Draft
  • Created On: Feb 16, 2010 - 9:48am
  • Last Updated: Oct 7, 2016 - 9:48pm