Computer scientists form mathematical formulation of the brain's neural networks

Apr 02, 2012

As computer scientists this year celebrate the 100th anniversary of the birth of the mathematical genius Alan Turing, who in the 1930s laid the theoretical foundations of digital computing and anticipated the electronic age, they are still in pursuit of a machine as adaptable and intelligent as the human brain.

Now, computer scientist Hava Siegelmann of the University of Massachusetts Amherst, an expert in neural networks, has taken Turing's work to its next logical step. She is translating her 1993 discovery of what she has dubbed "Super-Turing" computation into an adaptable computational system that learns and evolves, using input from the environment in a way much more like our brains do than classic Turing-type computers. She and her post-doctoral research colleague Jeremie Cabessa report on the advance in a recently published journal article.

"This model is inspired by the brain," she says. "It is a mathematical formulation of the brain's neural networks with their adaptive abilities." The authors show that when the model is installed in an environment offering constant like the real world, and when all stimulus-response pairs are considered over the machine's lifetime, the Super Turing model yields an exponentially greater repertoire of behaviors than the or Turing model. They demonstrate that the Super-Turing model is superior for human-like tasks and learning.

"Each time a Super-Turing machine gets input it literally becomes a different machine," Siegelmann says. "You don't want this for your PC. They are fine and fast calculators and we need them to do that. But if you want a robot to accompany a blind person to the grocery store, you'd like one that can navigate in a dynamic environment. If you want a machine to interact successfully with a human partner, you'd like one that can adapt to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That's what this model can offer."

Classical computers work sequentially and can only operate in the very orchestrated, specific environments for which they were programmed. They can look intelligent if they've been told what to expect and how to respond, Siegelmann says. But they can't take in new information or use it to improve problem-solving, provide richer alternatives or perform other higher-intelligence tasks.

In 1948, Turing himself predicted another kind of computation that would mimic life itself, but he died without developing his concept of a machine that could use what he called "adaptive inference." In 1993, Siegelmann, then at Rutgers, showed independently in her doctoral thesis that a very different kind of computation, vastly different from the "calculating computer" model and more like Turing's prediction of life-like intelligence, was possible. She published her findings in Science and in a book shortly after.

"I was young enough to be curious, wanting to understand why the Turing model looked really strong," she recalls. "I tried to prove the conjecture that neural networks are very weak and instead found that some of the early work was faulty. I was surprised to find out via mathematical analysis that the neural models had some capabilities that surpass the Turing model. So I re-read Turing and found that he believed there would be an adaptive model that was stronger based on continuous calculations."

Each step in Siegelmann’s model starts with a new Turing machine that computes once and then adapts. The size of the set of natural numbers is represented by the notation aleph-zero (ℵ0), which is also the number of different infinite calculations possible by classical Turing machines in a real-world environment on continuously arriving inputs. By contrast, Siegelmann’s most recent analysis demonstrates that Super-Turing computation has 2^ℵ0 possible behaviors. "If the Turing machine had 300 behaviors, the Super-Turing would have 2^300, more than the number of atoms in the observable universe," she explains.
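The quoted comparison with the number of atoms in the observable universe (commonly estimated at around 10^80) is easy to verify, since Python integers have arbitrary precision. This quick check is our own back-of-the-envelope arithmetic, not part of the original article:

```python
# 2^300 versus the commonly cited estimate of ~10^80 atoms
# in the observable universe.
behaviors = 2 ** 300
atoms_estimate = 10 ** 80

print(f"2^300 has {len(str(behaviors))} digits")  # 91 digits, i.e. about 2.0e90
print(behaviors > atoms_estimate)                 # True
```

So a machine with 300 independent binary behavioral choices already has a behavior space roughly ten billion times larger than the atom count, which is the scale of the gap Siegelmann is describing.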

The new Super-Turing machine will not only be flexible and adaptable but economical. This means that when presented with a visual problem, for example, it will act more like our human brains and choose salient features in the environment on which to focus, rather than using its power to visually sample the entire scene as a camera does. This economy of effort, using only as much attention as needed, is another hallmark of high artificial intelligence, Siegelmann says.

"If a Turing machine is like a train on a fixed track, a Super-Turing machine is like an airplane. It can haul a heavy load, but also move in endless directions and vary its destination as needed. The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain," she adds.

Siegelmann and two colleagues recently were notified that they will receive a grant to build the first-ever Super-Turing computer, based on Analog Recurrent Neural Networks. The device is expected to introduce a level of intelligence not seen before in artificial computation.

Provided by University of Massachusetts at Amherst

User comments: 9

Henka
5 / 5 (4) Apr 02, 2012
Finally, something positive about AI.
Pyle
2.3 / 5 (3) Apr 02, 2012
Positive? What is positive about researchers being granted money to bring us one step closer to enslavement by our creations?

Not that I don't welcome our robot overlords. (I sure hope they can parse my double negative.)
jscroft
2 / 5 (4) Apr 02, 2012
There's an exponent missing here.

By contrast, Siegelmann's most recent analysis demonstrates that Super-Turing computation has 20ALEPH possible behaviors. "If the Turing machine had 300 behaviors, the Super-Turing would have 2300, more than the number of atoms in the observable universe," she explains.


should read

By contrast, Siegelmann's most recent analysis demonstrates that Super-Turing computation has 2^(aleph-0) possible behaviors. "If the Turing machine had 300 behaviors, the Super-Turing would have 2^300, more than the number of atoms in the observable universe," she explains.

examachine
1 / 5 (4) Apr 02, 2012
The exponent does not matter. There is no such thing as a real-valued computer. That is not a computer, it is not physically realizable. This Siegelmann character has based her entire "research" on ignorance of set theory and theory of computation. This press release is ridiculous. This theoretical charlatanry has certainly nothing to do with AI, because such real-valued computers cannot be physically realized. Not in any finite space-time, basically, you can't ever build it. I condemn this press release, it's a shame for the CS community.
bewertow
3.8 / 5 (4) Apr 03, 2012
The exponent does not matter. There is no such thing as a real-valued computer. That is not a computer, it is not physically realizable. This Siegelmann character has based her entire "research" on ignorance of set theory and theory of computation. This press release is ridiculous. This theoretical charlatanry has certainly nothing to do with AI, because such real-valued computers cannot be physically realized. Not in any finite space-time, basically, you can't ever build it. I condemn this press release, it's a shame for the CS community.


how about you publish a paper in response if you know so much?
Tausch
1 / 5 (2) Apr 03, 2012
Apologies to all. All of you will not live long enough to even touch the surface of an endeavor Siegelmann has envisaged.

Here is what you must mathematically express and model to realize AI:

http://medicalxpr...une.html

Microglia.
Without this, the word 'adaptability' is really another label for the word 'mockery'.

Congratulations and kudos on duping grant givers.
They deserve nothing less.
adamcrume
5 / 5 (1) Apr 03, 2012
@examachine: Just because the theoretical machine is not physically realizable doesn't automatically mean it is useless. Turing machines are not physically realizable either, because they have an infinite tape. However, real computers are originally based on a Turing model, even though their storage is finite.
jscroft
1 / 5 (3) Apr 03, 2012
@Examachine: I'm invoking the existence argument. If such things weren't realizable in polynomial time, we wouldn't be having this conversation.
HenisDov
1 / 5 (4) Apr 10, 2012
Consciousness is a brainchild, and the brain is a progeny of mono-cells communities evolution:

Origin Of Brained-Nerved Organisms

From http://universe-l...ilation/

Evolution of life, of mass formats self-replication:

RNA nucleotides Genes (organisms) to RNA and DNA genomes (organisms) to mono-cellular to multicellular organisms.
Individual mono-cells to cooperative mono-cells communities, cultures.
Mono-cells cultures to neural systems, then to nerved multicellular organisms.

Dov Henis
(comments from 22nd century)