Challenging the limits of learning: Human mind vs. yardstick of a machine

Jan 19, 2011

Although we're convinced that our baby is brilliant when she utters her first words, cognitive scientists have been conducting a decades-long debate about whether human beings actually "learn" language.

Most theoretical linguists, including the noted researcher Noam Chomsky, argue that people are born with a "language organ" -- an inherent capacity for language that's activated during early childhood. On the other hand, researchers like Dr. Roni Katzir of Tel Aviv University's Department of Linguistics insist that what humans can actually learn is still an open question -- and he has built a computer program to try to find an answer.

"I have built a computer program that learns basic grammar using only the bare minimum of cognitive machinery -- the bare minimum that children might have -- to test the hypothesis that language can indeed be learned," says Dr. Katzir, a graduate of the Massachusetts Institute of Technology (where he took classes taught by Chomsky) and a former faculty member at Cornell University. His early results suggest that the process of language acquisition might be much more active than the majority of linguists have assumed up until now.

Dr. Katzir's work was recently presented at a Cornell University workshop, where researchers from linguistics, psychology, and computer science gathered to discuss learning processes.

A math model in mind

Able to learn basic grammar, the program relies on no preconceived assumptions about language or how it might be learned. Still in its early stages of development, it helps Dr. Katzir explore the limits of learning -- what kinds of information can a complex cognitive system like the human mind acquire and then store at the unconscious level? Do people "learn" language, and if so, can a computer be made to learn the same way?

Using a type of machine learning known as "unsupervised learning," Dr. Katzir has programmed his computer to "learn" simple grammar on its own. The program sees raw data and conducts a random search to find the best way to characterize what it sees.

The computer looks for the simplest description of the data using a criterion known as Minimum Description Length. "The process of human learning is similar to the way computers compress files: it searches for recognizable patterns in the data. Let's say, for instance, that you want to describe a string of 1,000 letters. You can be very naïve and list all the letters in order, or you can start to notice patterns -- maybe every other character is a vowel -- and use that information to give a more compact description. Once you understand something better, you can describe it more efficiently," he says.
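
To give a concrete flavor of the idea, here is a small illustrative sketch in Python (this is not Dr. Katzir's actual program; the scoring scheme, the toy vowel/consonant "grammar," and every name in it are invented here purely for illustration). It scores candidate descriptions of a letter string by their total description length and runs the kind of random search described above to keep the shortest one:

# Illustrative sketch only -- not Dr. Katzir's program; the scheme and names are invented.
import math
import random

VOWELS = set("aeiou")
ALPHABET_BITS = math.log2(26)

def description_length(text, period):
    """Two-part MDL score for a toy 'grammar': positions repeat in a cycle of
    length `period`, and each slot in the cycle is labelled vowel or consonant.
    Total cost = bits to state the rule + bits to spell out the text given it."""
    model_cost = period  # one vowel/consonant label per slot in the cycle
    # Label each slot by majority vote over the characters it covers.
    slot_is_vowel = []
    for j in range(period):
        chars = text[j::period]
        n_vowels = sum(ch in VOWELS for ch in chars)
        slot_is_vowel.append(n_vowels > len(chars) - n_vowels)
    # Characters that fit their slot's class are cheap to name;
    # exceptions fall back to the full alphabet plus a flag bit.
    data_cost = 0.0
    for i, ch in enumerate(text):
        fits = (ch in VOWELS) == slot_is_vowel[i % period]
        data_cost += math.log2(5 if ch in VOWELS else 21) if fits else ALPHABET_BITS + 1
    return model_cost + data_cost

def random_search(text, trials=200):
    """Unsupervised random search: propose cycle lengths at random and keep
    whichever gives the shortest total description of the raw data."""
    best_period, best_cost = 1, description_length(text, 1)
    for _ in range(trials):
        period = random.randint(1, 12)
        cost = description_length(text, period)
        if cost < best_cost:
            best_period, best_cost = period, cost
    return best_period, best_cost

# A 1,000-letter string with a hidden regularity: every other character is a vowel.
text = "".join(random.choice("bcdfgklmnprst") + random.choice("aeiou") for _ in range(500))
naive_cost = len(text) * ALPHABET_BITS  # the naive option: list all 1,000 letters
period, cost = random_search(text)
print(f"naive listing: {naive_cost:.0f} bits")
print(f"best grammar found (cycle length {period}): {cost:.0f} bits")

On such a string, the cyclic vowel/consonant description comes out shorter than naively listing each letter, so the search settles on it -- a toy version of describing the data more efficiently once a pattern has been noticed.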

Artificial intelligence for answering machines

His early results point to the conclusion that the computer, modeling the human mind, is indeed able to "learn" -- that language acquisition need not be limited to choosing from a finite series of possibilities.

While it's primarily theoretical, Dr. Katzir's research may have applications in technologies such as voice dialogue systems: a computer that, on its own, can better understand what callers are looking for. A more advanced version of Dr. Katzir's program might learn natural grammar and be able to process data received in a realistic setting, reflecting the manner in which humans actually talk.

The results of the research might also be applied to studying how we learn to "read" visual images, and might help teach a robot how to reconstruct a three-dimensional space from a two-dimensional image and describe what it sees. Dr. Katzir plans to pursue this line of research with engineering colleagues at Tel Aviv University and abroad.

"Many linguists today assume that there are severe limits on what is learnable," Dr. Katzir says. "I take a much more optimistic view about those limitations and the capacity of humans to learn."

User comments: 14

droid001
2.3 / 5 (3) Jan 19, 2011
The problem is - a machine or program does not understand the meaning of what it hears.
CSharpner
4.3 / 5 (6) Jan 19, 2011
The problem is - a machine or program does not understand the meaning of what it hears.

Only those who don't understand machine learning and programming would assume that. Considering the human brain IS a machine, by definition, machines CAN understand meanings. Whether software runs on bio-cells or silicon cells is irrelevant. With the right logic structure they can and do learn and can and do "understand". There are varying degrees of "understanding meaning", but there's no logical reason why a silicon mind can't learn and understand in a similar manner to how biological computers can.
num3472
3.7 / 5 (3) Jan 19, 2011
I'm an undergraduate researcher in machine learning, and a decent programmer, and I'm a little disturbed at you, Michael

droid001 seems to be correct - from the article's description, it seems as though the program is learning grammar, not meanings. It's not trying to learn meanings. And until we can get it to learn meaning, it's not going to be any use to droid001.
He needs something more like the IBM Jeopardy computer, Watson.

droid001 wasn't elegant, but he wasn't ignorant.
plasticpower
not rated yet Jan 20, 2011
I agree with both comments above me, but I have to say something about what the article had to say about learning to reconstruct 3D shapes from 2D pictures. If a computer can learn to do such a task reliably, doesn't that mean that the computer has a complete "understanding" of at least the shape of an object? It's not a far stretch to teach a computer to associate a 3D shape of an object it saw in a picture with an object it might see in real life. It then might be possible for a computer to learn how the object is used. Then, if you give this computer program a purpose (task) and the means to accomplish it via a manipulation of objects it learned about, the resulting machine might actually appear creepily intelligent.

At this point, if you teach it to understand spoken commands, and give it a few rules to question these commands (whether it's OK to carry them out), you might have an AI that resembles the intelligence of a child.
CSharpner
5 / 5 (3) Jan 20, 2011
I'm an undergraduate researcher in machine learning, and a decent programmer, and I'm a little disturbed at you
Didn't mean to get you disturbed.
droid001 seems to be correct - from the article's description, it seems as though the program is learning grammar, not meanings
droid001 made a generalized statement that machines can't understand meanings. That is incorrect. They certainly can. Whether they do today or not is not my point. My point is machines are perfectly capable of understanding, given the right programming. Also, like I said, there are degrees of "understanding". I've been programming since 1982 and consider myself a kick-a$$ programmer (it's a flaw in most of us that we consider ourselves that). I also have intense interest in AI and the workings of thought in the human brain. I can tell you, as a matter of fact, that machines absolutely CAN (not necessarily DO, but CAN) understand meaning.
frajo
not rated yet Jan 20, 2011
I can tell you, as a matter of fact, that machines absolutely CAN (not necessarily DO, but CAN) understand meaning.
In a very limited way only. You forget that out there are things like literature, poems, lyrics, paintings, music which more often than not have meanings you cannot adequately express using machines.
CSharpner
5 / 5 (3) Jan 20, 2011
In a very limited way only. You forget that out there are things like literature, poems, lyrics, paintings, music which more often than not have meanings you cannot adequately express using machines.
I disagree. In fact, the arts and even comedy are the very things I had in mind. Those are understood via their context. You need a large database of experience and memories to reference to understand them, but they are, nonetheless, understandable. Again, anything a human brain can understand, by definition, a computer can, since a brain IS a computer. Add enough processors, memory, and the right kind of programming, and a silicon brain can comprehend anything a biological brain can, up to and including emotions. Not all AI is hard coded with code like:

if (Joke in Jokes.List) then result = funny;

AI can be coded at a fundamental level of basically simulating neurons interacting (as in neural networks). "Intelligence" can become an emergent property.
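
To make that concrete, here's a toy example (rough illustrative Python, not any real AI system; the names and numbers are made up): a single artificial neuron that is never given an if/then rule for AND, yet ends up computing it simply by nudging its connection weights whenever its answer disagrees with a training example.

# Toy illustration only -- a minimal perceptron, not any real AI system.
# The behaviour comes from adjusted weights, not hand-coded rules.
def train_neuron(examples, epochs=20, rate=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            output = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            error = target - output          # perceptron learning rule
            w0 += rate * error * x0
            w1 += rate * error * x1
            bias += rate * error
    return w0, w1, bias

# Teach it logical AND purely from input/output examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = train_neuron(examples)
for (x0, x1), _ in examples:
    print((x0, x1), "->", 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0)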
frajo
5 / 5 (1) Jan 23, 2011
In a very limited way only. You forget that out there are things like literature, poems, lyrics, paintings, music which more often than not have meanings you cannot adequately express using machines.
I disagree. In fact, the arts and even comedy are the very things I had in mind. Those are understood via their context.
Whose context?
You need a large database of experience and memories to reference to understand them, but they are, nonetheless, understandable. Again, anything a human brain can understand, by definition, a computer can, since a brain IS a computer.
What is "a human brain"? That of Kim Peek? That of Leonardo da Vinci? How many people _understand_ James Joyce's Finnegans Wake, Arno Schmidt's Zettels Traum, Picasso's Guernica, Penderecki's Threnos, the references in the Chinese original of "Journey to the West", the philosophical implications of "Hard to Be a God" by the Strugatsky brothers? Are contexts enumerable? What does "understanding" mean?
CSharpner
5 / 5 (1) Jan 23, 2011
Whose context?
Not "whose". Just like the human mind finds context from their own knowledge base, so would AI.
What is "a human brain"? That of Kim Peek?...
You're missing the point. If humans can understand meaning, so can machines, since human brains ARE machines.
What does "understanding" mean?
THAT, my friend, is an EXCELLENT question! But, without answering it, I CAN say that whatever type of low-level logic a human brain uses to "understand meaning", a silicon brain can use as well.

I fived you because of that question, BTW.
El_Nose
not rated yet Jan 24, 2011
but people who code neural networks know that thinking is not what neural networks do -- they are basically a super-general supervised modelling technique that does not think, but gives responses based on input and prior data that has been validated by a third party to ensure proper training.

Don't throw terms around and use them in unrealistic ways just because most of the populace has no clue to their meaning.

Neural networks do not create thought nor are they a good approximation... what they are and what they do is take data and give responses based on that data -- and can take a validation of their response to form new ones for future data... It was thought by Kant that this is the basis of thought -- but pure networks prove this to be false. But we can create a computer with an opinion -- no matter how stupid that opinion tends to be.
El_Nose
3 / 5 (1) Jan 24, 2011
AI has two basic flavors

the one Mr. Sharpner is referring to has been given up on by the CS research community & that is mimicking human intelligence. What is focused on now is breaking down human perception into parts and getting that part right before attempting to recreate an entire consciousness.

The simple fact of the matter is AI is a program that can do a task once done by humans better than a human can. Such as play a game, build a car, steer a car, trade on the stock market.

But creation of a computer that can pass a Turing Test (a test where a human asks questions & based on the responses cannot tell if they are interacting with a machine) is purely an academic exercise

BTW the Jeopardy game show with the computer built by IBM is an underdog. But how much it wagers & its next picks should be interesting: does it go for the hard questions or start with the easy ones? What AI controls how it will bet Daily Doubles? It is an experiment in context & AI
CSharpner
not rated yet Jan 25, 2011
Don't throw terms around and use them in unrealistic ways just because most of the populace has no clue to their meaning.
I didn't and it certainly wasn't "because most of the populace has no clue...". The human brain is a neural network. With limited space here, I can't go into a full explanation of both artificial neural nets and biological neural nets. The point is the same though. Machines CAN "understand meaning". Whichever method will or could be used isn't the issue. The issue is "Can it be done?" and the answer is a resounding "YES!"
Neural networks do not create thought nor are they a good approximation... what they are and what they do is take data and give responses based on that data
The space here is too limited to discuss the fine details, and you're partly misunderstanding what I'm saying, but what you just described that NNs do is what the brain does. Of course, our limited uses of NNs aren't a full brain and only roughly simulate a few neurons.
CSharpner
not rated yet Jan 25, 2011
El_Nose, I three'd ya. Your understanding of neural nets seems to be pretty good, which is worth a 4 or 5, but you missed my primary point somewhat. But, it looks like you might agree with it:
But we can create a computer with an opinion -- no matter how stupid that opinion tends to be.
Not sure though, because I can't tell from that whether you're saying they CAN "understand meaning" or not. Anyway, my point is that regardless of the type of programming that will eventually be used (or won't be, but could have been), machines CAN "understand meaning". My ref to NNs:
(as in neural networks)
was a side comment as a simple example. I don't disagree that the small, limited NNs (compared to the neural networks of human cells) that have so far been implemented are not "thought producing" mechanisms. But, if you drill down into a working human brain, you find large, complex neural networks where actual thought occurs. Yes, implemented somewhat differently than the limited NNs in software today.
El_Nose
not rated yet Jan 27, 2011
Didn't another group just release findings that babies' language centers are just as developed as adults' and that they use the same cues as adults to interpret meanings and to find context?