A Grand Unified Theory of Artificial Intelligence

Mar 30, 2010

In the 1950s and '60s, artificial-intelligence researchers saw themselves as trying to uncover the rules of thought. But those rules turned out to be way more complicated than anyone had imagined. Since then, artificial-intelligence (AI) research has come to rely, instead, on probabilities -- statistical patterns that computers can learn from large sets of training data.

The probabilistic approach has been responsible for most of the recent progress in artificial intelligence, such as voice recognition systems or the system that recommends movies to Netflix subscribers. But Noah Goodman, an MIT research scientist whose department is Brain and Cognitive Sciences but whose lab is Computer Science and Artificial Intelligence, thinks that AI gave up too much when it gave up rules. By combining the old rule-based systems with insights from the new probabilistic systems, Goodman has found a way to model thought that could have broad implications for both AI and cognitive science.

Early AI researchers saw thinking as logical inference: if you know that birds can fly and are told that the waxwing is a bird, you can infer that waxwings can fly. One of AI’s first projects was the development of a mathematical language — much like a computer language — in which researchers could encode assertions like “birds can fly” and “waxwings are birds.” If the language was rigorous enough, computer algorithms would be able to comb through assertions written in it and calculate all the logically valid inferences. Once they’d developed such languages, AI researchers started using them to encode lots of commonsense assertions, which they stored in huge databases.
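
The flavor of those early rule-based systems can be sketched in a few lines of code. The toy example below is illustrative only: the facts, rules, and function names are invented, and Python stands in for the logic languages of the era.

```python
# A toy forward-chaining engine in the spirit of the early rule-based systems.
# Facts are (predicate, subject) pairs; each rule says "predicate A implies
# predicate B". All names here are illustrative, not from any real system.

facts = {("bird", "waxwing")}           # "the waxwing is a bird"
rules = {"bird": "can_fly"}             # "birds can fly"

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    while True:
        new = {(rules[p], x) for (p, x) in derived if p in rules} - derived
        if not new:
            return derived
        derived |= new

print(forward_chain(facts, rules))
# -> {('bird', 'waxwing'), ('can_fly', 'waxwing')}
```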

The problem with this approach is, roughly speaking, that not all birds can fly. And among birds that can’t fly, there’s a distinction between a robin in a cage and a robin with a broken wing, and another distinction between any kind of robin and a penguin. The mathematical languages that the early AI researchers developed were flexible enough to represent such conceptual distinctions, but writing down all the distinctions necessary for even the most rudimentary cognitive tasks proved much harder than anticipated.

Embracing uncertainty

In probabilistic AI, by contrast, a computer is fed lots of examples of something — like pictures of birds — and is left to infer, on its own, what those examples have in common. This approach works fairly well with concrete concepts like “bird,” but it has trouble with more abstract concepts — for example, flight, a capacity shared by birds, helicopters, kites and superheroes. You could show a probabilistic system lots of pictures of things in flight, but even if it figured out what they all had in common, it would be very likely to misidentify clouds, or the sun, or the antennas on top of buildings as instances of flight. And even flight is a concrete concept compared to, say, “grammar,” or “motherhood.”

As a research tool, Goodman has developed a computer programming language called Church — after the great American logician Alonzo Church — that, like the early AI languages, includes rules of inference. But those rules are probabilistic. Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.
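
Church itself is a probabilistic dialect of Lisp, but the belief revision described above boils down to an application of Bayes' rule. The sketch below is plain Python rather than Church, and the prior and likelihood values are invented, chosen only to illustrate how strong evidence can overturn a confident prior.

```python
# Plain-Python sketch of the belief revision described above (not Church code).
# The prior and likelihoods are made-up numbers for illustration only.

p_fly = 0.9                      # prior: told only "the cassowary is a bird"
p_heavy_given_fly = 0.001        # flying birds almost never weigh ~200 lb
p_heavy_given_not_fly = 0.5      # flightless birds often do

# Bayes' rule: P(fly | heavy) = P(heavy | fly) * P(fly) / P(heavy)
numerator = p_heavy_given_fly * p_fly
evidence = numerator + p_heavy_given_not_fly * (1 - p_fly)
p_fly_given_heavy = numerator / evidence

print(round(p_fly_given_heavy, 3))   # -> 0.018, i.e. probably can't fly
```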

“With probabilistic reasoning, you get all that structure for free,” Goodman says. A Church program that has never encountered a flightless bird might, initially, set the probability that any bird can fly at 99.99 percent. But as it learns more about cassowaries — and penguins, and caged and broken-winged robins — it revises its probabilities accordingly. Ultimately, the probabilities represent all the conceptual distinctions that early AI researchers would have had to code by hand. But the system learns those distinctions itself, over time — much the way humans learn new concepts and revise old ones.
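
One standard way such an estimate could be revised from data is Beta-Bernoulli updating. The sketch below is plain Python, not Church, and the weak prior and observation counts are invented for illustration.

```python
# Sketch of revising an estimate like "99.99 percent of birds fly" from data,
# using Beta-Bernoulli updating. The prior and the observations are invented.

alpha, beta = 0.9999, 0.0001     # weak prior with mean 0.9999

observations = [True] * 50 + [False] * 5   # 50 flying birds, 5 flightless
for flies in observations:
    if flies:
        alpha += 1
    else:
        beta += 1

print(round(alpha / (alpha + beta), 3))    # -> 0.911, revised downward
```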

“What’s brilliant about this is that it allows you to build a cognitive model in a fantastically much more straightforward and transparent way than you could do before,” says Nick Chater, a professor of cognitive and decision sciences at University College London. “You can imagine all the things that a human knows, and trying to list those would just be an endless task, and it might even be an infinite task. But the magic trick is saying, ‘No, no, just tell me a few things,’ and then the brain — or in this case the Church system, hopefully somewhat analogous to the way the mind does it — can churn out, using its probabilistic calculation, all the consequences and inferences. And also, when you give the system new information, it can figure out the consequences of that.”

Modeling minds

Programs that use probabilistic inference seem to be able to model a wider range of human cognitive capacities than traditional cognitive models can. At the Cognitive Science Society's 2008 conference, for instance, Goodman and Charles Kemp, who was a PhD student in BCS at the time, presented work in which they'd given human subjects a list of seven or eight employees at a fictitious company and told them which employees sent e-mail to which others. Then they gave the subjects a short list of employees at another fictitious company. Without any additional data, the subjects were asked to create a chart depicting who sent e-mail to whom at the second company.

If the e-mail patterns in the sample case formed a chain — Alice sent mail to Bob who sent mail to Carol, all the way to, say, Henry — the human subjects were very likely to predict that the e-mail patterns in the test case would also form a chain. If the e-mail patterns in the sample case formed a loop — Alice sent mail to Bob who sent mail to Carol, and so on, but Henry sent mail to Alice — the subjects predicted a loop in the test case, too.

A program that used probabilistic inference, asked to perform the same task, behaved almost exactly like a human subject, inferring chains from chains and loops from loops. But conventional cognitive models predicted totally random e-mail patterns in the test case: they were unable to extract the higher-level concepts of loops and chains. With a range of collaborators in the Department of Brain and Cognitive Sciences, Goodman has conducted similar experiments in which subjects were asked to sort stylized drawings of bugs or trees into different categories, or to make inferences that required guessing what another person was thinking. In all these cases — several of which were also presented at the Society’s conference — Church programs did a significantly better job of modeling human thought than traditional algorithms did.
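
The published models were Church programs; the sketch below is only a rough Python illustration of the underlying idea: score structural hypotheses ("chain" vs. "loop") against the observed who-mailed-whom pattern, then carry the better-scoring hypothesis over to the new company. The 5 percent noise level and the restriction to two hypotheses are simplifying assumptions.

```python
# Rough illustration of inferring higher-level structure from an observed
# e-mail pattern. Not the actual Church models; names and noise are invented.

from itertools import permutations

def chain_edges(order):
    return {(order[i], order[i + 1]) for i in range(len(order) - 1)}

def loop_edges(order):
    return chain_edges(order) | {(order[-1], order[0])}

def likelihood(observed, people, structure, noise=0.05):
    """Best-case P(observed | structure) over all orderings of the people."""
    pairs = [(a, b) for a in people for b in people if a != b]
    best = 0.0
    for order in permutations(people):
        predicted = structure(list(order))
        p = 1.0
        for pair in pairs:
            agrees = (pair in predicted) == (pair in observed)
            p *= (1 - noise) if agrees else noise
        best = max(best, p)
    return best

people = ["Alice", "Bob", "Carol", "Dan"]
observed = {("Alice", "Bob"), ("Bob", "Carol"),
            ("Carol", "Dan"), ("Dan", "Alice")}       # a loop

scores = {name: likelihood(observed, people, fn)
          for name, fn in [("chain", chain_edges), ("loop", loop_edges)]}
print(max(scores, key=scores.get))                    # -> loop
```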

Chater cautions that, while Church programs perform well on such targeted tasks, they’re currently too computationally intensive to serve as general-purpose mind simulators. “It’s a serious issue if you’re going to wheel it out to solve every problem under the sun,” Chater says. “But it’s just been built, and these things are always very poorly optimized when they’ve just been built.” And Chater emphasizes that getting the system to work at all is an achievement in itself: “It’s the kind of thing that somebody might produce as a theoretical suggestion, and you’d think, ‘Wow, that’s fantastically clever, but I’m sure you’ll never make it run, really.’ And the miracle is that it does run, and it works.”

User comments: 31

marjon
1 / 5 (2) Mar 30, 2010
Nemo
2.6 / 5 (5) Mar 30, 2010
Every time I read an article like this I think our robotic overlords are one step closer to the door.
LariAnn
1 / 5 (3) Mar 30, 2010
"You will all be happy - and controlled."
Shootist
1.7 / 5 (6) Mar 30, 2010
Isn't it a foregone conclusion that if AI gains consciousness the rest of sentience (Us, I mean) are toast?
thales
5 / 5 (5) Mar 30, 2010
It sounds like a pretty straightforward Bayesian approach. I'd be surprised if this hasn't been done before.
frajo
3.3 / 5 (6) Mar 30, 2010
AI won't be a match for the human way of associative, intuitive, and irrational thinking for a long time to come.
Glyndwr
3.8 / 5 (5) Mar 30, 2010
the irrational part especially ;)
BigTone
4.6 / 5 (7) Mar 30, 2010
No need to be alarmist - those of us that actually build AI engines that are in use will tell you this article contains no breakthroughs of any sort and everything they are doing is well understood... This article belongs in a 1st year computer science intro to AI chapter...
otto1923
3 / 5 (3) Mar 30, 2010
Hard to imagine an AI would actually ever -want- anything that we didn't tell it to want. We could give it a sense of self-preservation I suppose, and then give it the ability to sense threats to its physical makeup, power supply, etc. Curiosity about its environment would be in direct proportion to its programmed need to protect itself, just like us. Would it 'care' if we turned it off? Not unless we gave it the specific ability to do so. If we're smart, AI will always need us to tell it what to do.
jsa09
3.6 / 5 (5) Mar 30, 2010
otto - when you say "if we are smart" you automatically leave the door wide open to anything.

"We" are not smart that is proved countless times, this means that what we need to do is not the same as what we will do.
bottomlesssoul
2.3 / 5 (3) Mar 30, 2010
Humans naturally think in terms of knowing and believing while ignoring the possibilities of uncertainties and unknown unknowns. These are frequently (incorrectly) referred to as cognitive errors though it seems they should be called common default cognitive behaviors.

If you want this system to sound and learn like a normal human, dumb it down a bit with our normal default cognitive behaviors. I bet it even gets religion :-)
Rynox77
4.5 / 5 (2) Mar 30, 2010
No need to be alarmist - those of us that actually build AI engines that are in use will tell you this article contains no breakthroughs of any sort and everything they are doing is well understood... This article belongs in a 1st year computer science intro to AI chapter...


Where could we find further reading?
plasticpower
5 / 5 (3) Mar 30, 2010
The breakthrough lies in the fact that a programming language was developed to support these concepts, which until now were just that: concepts. But for the first time, there is a standard language that allows you to program an AI that follows these rules, and given enough computational power, there should be no reason this can't simulate a mind. It's going to be a very different mind than a human one, but nevertheless, it will be able to reason and learn.
poi
2 / 5 (3) Mar 31, 2010
The trick to thinking is the ability to be able to stop thinking.
NeptuneAD
1.7 / 5 (3) Mar 31, 2010
Given enough power and free will, an AI will emerge that is sentient, but will we call it a Cylon?

This has all happened before, it will all happen again.
EvgenijM
4 / 5 (1) Mar 31, 2010
Noah, Goodman, Church... thought it was already April 1st, but he actually exists - http://www.mit.edu/~ndg/
GaryB
5 / 5 (2) Mar 31, 2010
Isn't it a foregone conclusion that if AI gains consciousness the rest of sentience (Us, I mean) are toast?


Sadly, no. Most of what you do and who you are derives not from "intelligence" but from values/motivations etc. Robots and AIs will do what they value. Humans had to survive and so developed the 4f's fighting/fleeing/feeding/mating. Robots don't/won't necessarily have the same motivations.
Yoaker
3.3 / 5 (3) Mar 31, 2010
Article: "they’re currently too computationally intensive to serve as general-purpose mind simulators"

and always will be because it is an NP-problem, i.e. computational power is irrelevant.

PlasticPower: "and given enough computational power, there should be no reason this can't simulate a mind"

Sorry, this is (most likely) wrong for the above reason. It will take another "unknown" approach..

Runoxx7: "Where could we find further reading?"

I would recommend Sir Roger Penrose, especially "The Emperor's New Mind".
stonehat
Mar 31, 2010
This comment has been removed by a moderator.
plasticpower
4.5 / 5 (2) Mar 31, 2010
A problem becomes NP-complete when no polynomial time algorithm that finds an ideal solution exists. Approximation algorithms don't seek the ideal solution, they provide a "good enough" solution, therefore one can always devise a polynomial time approximation algorithm. It might still be slow, but it won't be intractable. The article is talking about a language that is obviously using a type of approximation algorithm to make assumptions, which means you can put higher and lower bounds on how far deep it will "think" before spitting out an approximation answer. Just like your brain does when it acts on an input that isn't complete or exact.
Yoaker
1 / 5 (1) Mar 31, 2010
Article, -they’re currently too computationally intensive to serve as general-purpose mind simulators-

and always will be because it is an NP-problem, i.e. computational power is irrelevant.

PlasticPower, -and given enough computational power, there should be no reason this can't simulate a mind-

Sorry, that is (most likely) wrong for the above reason. It will take another 'unknown' approach..

Runoxx7, -Where could we find further reading?-

BigTone is right, except that there are probably no good AI textbooks... I would recommend Sir Roger Penrose, especially 'The Emperor's New Mind'.
Yoaker
4 / 5 (1) Mar 31, 2010
A problem becomes NP-complete when no polynomial time algorithm that finds an ideal solution exists. Approximation algorithms don't seek the ideal solution, they provide a "good enough" solution, therefore one can always devise a polynomial time approximation algorithm. It might still be slow, but it won't be intractable. The article is talking about a language that is obviously using a type of approximation algorithm to make assumptions, which means you can put higher and lower bounds on how far deep it will "think" before spitting out an approximation answer. Just like your brain does when it acts on an input that isn't complete or exact.


I do not think this reasoning holds, but I cannot think of a rigid argument at the moment. Would it not be strange if such difficult problems could simply be approximated? Is it always the case that the approximation is good enough?
Yoaker
4.5 / 5 (2) Mar 31, 2010
@plasticpower:
The example with the brain does not really show that it approximates solutions to NP-problems. The fact that it can approximate an answer in the face of incomplete or inexact input is a feature of content-addressable memory, which is an NP-problem, but it does not follow that the solution to that NP-problem is in itself an approximation. I do agree that since the brain is physical it seems reasonable to conclude that it must somehow approximate unless NP=P, but in a larger perspective it is just as reasonable to conclude that the brain takes advantage of some unknown or overlooked non-computational phenomenon, as Penrose suggests, given that our understanding of physics is incomplete.

Would setting bounds on the approximation not be just as hard as the original problem? That is, we do not know in advance when an approximation will be good enough.
Assaad33
not rated yet Mar 31, 2010
What about genetic algorithms in artificial intelligence?
DudeGuy543
Mar 31, 2010
This comment has been removed by a moderator.
krundoloss
5 / 5 (1) Mar 31, 2010
I think we are going to really get advancement in this field with more neuron-electronic interfaces. Brain on a chip, anyone? I mean, why reinvent the wheel, right? If we use real neurons and interface with them electronically, there's no reason we can't engineer our own, perhaps even grown from artificial DNA. I feel that would be easier and better than trying to teach a Pentium processor to THINK.
abhishekbt
3 / 5 (2) Apr 01, 2010
The trick to thinking is the ability to be able to stop thinking.


Hmm... That's very... thoughtful.
FrancisR77
1.6 / 5 (5) Apr 03, 2010
Robots are robots; they can never be human no matter how conditioned or programmed they are. They do not have a mind.
Recovering_Human
5 / 5 (2) Apr 03, 2010
You have to be human to have a mind? How's that?
gort
not rated yet Apr 04, 2010
Now just couple Church with an evolutionary algorithm that re-writes Church (over and over again), using natural selection to pick the "best" version from each generation.

Evolutionary algorithms are the computer equivalent of biological evolution.
Example:
http://www.inklin...-genius/
Birthmark
not rated yet Apr 04, 2010
Reading these comments, I see why this century will decide whether we exist or not; people believe our technology is going to take us over and kill the entire human race, lmao! I need an underground bunker so I last through this century...
hagureinu
5 / 5 (2) Apr 05, 2010
every time i read an article about AI, like this one, i'm amazed how primitive our knowledge about cognitive functioning is and how pompous article titles are. hey, come on! where exactly do you see a "Grand Unified Theory"? i see only a questionable idea about "probabilistic inference", which is very doubtful to have anything to do with AI at all. AI is about learning and awareness, about building, verifying and using cognitive patterns, about predicting reality outcomes based on internalized models etc.
gwrede
2 / 5 (3) Apr 05, 2010
They're doing something right. Their system mimics a human process called categorisation. Much of a child's early work in laying a basis for coherently experiencing the world is done by categorising everything: moving vs. stationary, soft vs. hard, Mummy vs. other people, edibles vs. objects, loose vs. fixed, tasty vs. yucky.

These categories are refined (split up) constantly: hard to very hard vs. hardish, moving to moving humans vs. moving non-humans, MNHs to passive MNHs vs. active MNHs (meaning rolling balls or thrown objects or wind-up toys vs. bees, flies, dogs).

One could claim that prejudice is merely a lack of refinement in categorisation. Good examples of this are the former President's only two categories of people, the good guys and the bad guys, and many less educated individuals' race-dependent attitudes.

So, this actually is fundamental. Kudos to the guys.