
Mental Imagery for Conversational Robots


Kai-yuh Hsiao, Nick Mavridis


How are people able to think about things that are not directly accessible to their senses? What is required for a machine to talk about entities that are out of sight, events that happened in the past, or views of the world from someone else's perspective? To address these and related questions, we are developing an architecture that consists of a tightly coupled pair of systems: a physical robot (Ripley) and a virtual world that reflects Ripley's mental model. Ripley is an interactive robot with vision, speech, and grasping capabilities. The world model is constructed from a physics simulator that models the dynamics of the physical world. This framework provides a foundation for our ongoing experiments in developing new models of natural language processing in which words are grounded in sensory-motor representations. The use of a world model allows us to explore models of word meaning that move beyond purely associative approaches of connecting words to sensors and motors. This work has applications in creating flexible and natural multimodal human-machine interfaces, and can also serve as a foundation for learning in our other projects, such as verb grounding.
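The coupling described above, in which live perception continuously updates a persistent world model that can then answer questions about things no longer in view, can be sketched roughly as follows. This is a minimal illustration under assumed names (`Robot`, `WorldModel`, `perceive`, `answer_where` are all hypothetical), not the actual Ripley implementation, which uses a full physics simulator rather than a simple object store.

```python
class WorldModel:
    """A simulated 'mental model': persists objects the robot has seen,
    so they can be referred to even when out of sight."""

    def __init__(self):
        self.objects = {}  # object name -> last known position

    def update(self, name, position):
        self.objects[name] = position

    def locate(self, name):
        # Answer from the model, not from current perception.
        return self.objects.get(name)


class Robot:
    """Couples live perception to the persistent world model."""

    def __init__(self):
        self.model = WorldModel()
        self.visible = {}  # what the camera currently sees

    def perceive(self, scene):
        # Each perceptual update refreshes both the current view
        # and the longer-lived mental model.
        self.visible = dict(scene)
        for name, pos in scene.items():
            self.model.update(name, pos)

    def answer_where(self, name):
        # Prefer direct perception; fall back on the mental model.
        if name in self.visible:
            return f"I can see the {name} at {self.visible[name]}."
        pos = self.model.locate(name)
        if pos is not None:
            return f"I remember the {name} was at {pos}."
        return f"I have never seen a {name}."


robot = Robot()
robot.perceive({"ball": (0.2, 0.5)})  # ball is in view
robot.perceive({})                    # ball leaves the field of view
print(robot.answer_where("ball"))     # answered from the mental model
```

The point of the split is that language about absent entities is grounded in the model's state rather than in the current sensory stream, which is what lets the system talk about the out-of-sight and the past.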

back to Cognitive_Machines