
LifeNet

Latest revision as of 23:27, 10 October 2011

http://web.media.mit.edu/~neptune/lifenet/lifenet-small.jpg

The LifeNet temporal agent model

LifeNet performs commonsense temporal inference over any number of time spans; it models a single person's knowledge of the states of the world as they progress through time. The model is currently implemented as a simple probabilistic model (a Markov field with lookup tables) that performs belief propagation over any specified number of time slices. The model can be accessed via an XML-RPC API, which allows remote access to the LifeNet server from any programming language.

A version of the LifeNet inference engine, with knowledge, is available for download. A LifeNet technical note explains the application of belief propagation and a general procedural language for specifying probability distributions within LifeNet. A 3-slice inference LifeNet demo shows the most basic query and truth inference over three time slices. A story (N-slice) inference and acquisition LifeNet demo shows some of LifeNet's more general temporal inference capabilities, offering possible story suggestions and encouraging users to add more stories to the LifeNet database. The Commonsense research group has more details about the current research being developed on top of the LifeNet temporal inference engine and knowledge base.
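The flavor of inference described above can be sketched with a toy forward-backward belief propagation over a chain of time slices. The states, transition table, and evidence below are illustrative stand-ins, not LifeNet's actual knowledge base or implementation:

```python
# Toy belief propagation over N time slices, in the spirit of the model
# described above. STATES and TRANS are hypothetical stand-ins.

STATES = ["asleep", "eating", "working"]

# Lookup-table transition probabilities P(next state | current state).
TRANS = {
    "asleep":  {"asleep": 0.7, "eating": 0.2, "working": 0.1},
    "eating":  {"asleep": 0.1, "eating": 0.3, "working": 0.6},
    "working": {"asleep": 0.2, "eating": 0.3, "working": 0.5},
}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def infer_marginals(n_slices, evidence):
    """Forward-backward over a chain of n_slices.

    evidence maps slice index -> {state: likelihood}; slices with no
    entry are unobserved (uniform likelihood).
    """
    unit = {s: 1.0 for s in STATES}
    # Forward pass: alpha[t][s] is proportional to P(s, evidence[0..t]).
    alpha = []
    for t in range(n_slices):
        obs = evidence.get(t, unit)
        if t == 0:
            a = {s: obs[s] / len(STATES) for s in STATES}
        else:
            a = {s: obs[s] * sum(alpha[t - 1][r] * TRANS[r][s] for r in STATES)
                 for s in STATES}
        alpha.append(normalize(a))
    # Backward pass: beta[t][s] is proportional to P(evidence[t+1..] | s).
    beta = [unit.copy() for _ in range(n_slices)]
    for t in range(n_slices - 2, -1, -1):
        obs_next = evidence.get(t + 1, unit)
        beta[t] = {s: sum(TRANS[s][r] * obs_next[r] * beta[t + 1][r]
                          for r in STATES)
                   for s in STATES}
    # Combine into per-slice marginals.
    return [normalize({s: alpha[t][s] * beta[t][s] for s in STATES})
            for t in range(n_slices)]

# A 3-slice query: "asleep" observed at slice 0, "working" at slice 2;
# infer the marginal distribution over the unobserved middle slice.
marginals = infer_marginals(3, {
    0: {"asleep": 1.0, "eating": 0.0, "working": 0.0},
    2: {"asleep": 0.0, "eating": 0.0, "working": 1.0},
})
print(marginals[1])
```

With these toy numbers the middle slice comes out most likely "eating", since that state bridges "asleep" and "working" with the highest combined transition probability.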

Social expert LifeNets perform collaborative reasoning

Experts reason about the world using learned models held in very abstract representations that are not directly communicable to a novice. However, the novice and the expert share commonsense knowledge, which the expert can use to describe the results of his or her reasoning to the novice. Here is a paper that I wrote for Marvin Minsky's class; it outlines a rough sketch of how experts use commonsense to collectively reason about hard problems (problems that no one expert can solve alone): Experts think together to solve hard problems (pdf) (html)

Academic expert LifeNet domains

We are currently working on converting the OpenCourseWare raw text to the LifeNet format so that we can begin to develop our model of expert communication and coordinated problem solving, combining multiple academic experts who use commonsense as a common language to share ways to think.

Automatic domain segregation within LifeNet

Part of learning from everyday experience is our ability to categorize and segregate knowledge into efficient domains of context-specific ways to think. We are currently looking into ways to automatically segregate a large LifeNet into multiple such domains that can be processed independently, allowing many independent reasoning algorithms to run in separate processes that exchange only a minimal amount of information.
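One simple way such segregation could work, offered purely as an illustration and not as the group's actual algorithm, is to treat propositions as nodes, links between co-occurring propositions as edges, and take connected components as independently processable domains:

```python
# Illustrative sketch: split a proposition graph into independent domains
# via connected components. The propositions below are hypothetical.

from collections import defaultdict

def segregate_domains(edges):
    """Return the connected components of an undirected proposition graph."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, domains = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            component.add(n)
            stack.extend(adj[n] - seen)
        domains.append(component)
    return domains

# Toy links between hypothetical LifeNet-style propositions.
links = [
    ("I am cooking", "I am in the kitchen"),
    ("I am in the kitchen", "I smell food"),
    ("I am driving", "I am on the highway"),
]
domains = segregate_domains(links)
print(len(domains))  # the kitchen domain and the driving domain
```

A real segregation would likely need soft clustering rather than strict components, since real knowledge graphs tend to be one large connected blob, but the output shape, in which separate domains can feed separate reasoning processes, is the same.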

Using LifeNet as a user model

Since LifeNet is a model of the states of the world that any individual might find themselves in as they go about their daily lives, slightly different LifeNets might be able to infer temporal events in specific people's lives. In his talk on The Future of the Mind, Push Singh discusses the possible ramifications of such user modeling, including the possibility of a sort of economy of human experience emerging as our personal computational models of ourselves become more and more useful. We are currently using the public Enron email dataset to build user models of individuals strictly from the lists of emails that those individuals have sent and received.
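As a rough illustration of seeding a user model from email alone, and this is an assumption on our part rather than the actual pipeline, one could estimate when a user tends to be in an active state from the send times of their mail:

```python
# Illustrative sketch: a crude per-user activity profile built from
# sent-email timestamps. The timestamps below are made up, not Enron data.

from collections import Counter
from datetime import datetime

def hourly_activity(sent_timestamps):
    """Map hour of day -> fraction of the user's emails sent in that hour."""
    counts = Counter(datetime.fromisoformat(ts).hour for ts in sent_timestamps)
    total = sum(counts.values())
    return {hour: n / total for hour, n in counts.items()}

# Toy stand-in for one user's sent-mail timestamps.
sent = [
    "2001-05-14T09:12:00",
    "2001-05-14T14:30:00",
    "2001-05-15T09:45:00",
    "2001-05-16T09:05:00",
]
profile = hourly_activity(sent)
print(profile)
```

Such a profile could then serve as evidence on the corresponding time slices of a personal LifeNet, biasing inference toward states like "at work" during the hours the user is demonstrably sending mail.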