Math that powers spam filters used to understand how the brain learns to move our muscles

Jun 01, 2007

A team of biomedical engineers has developed a computer model that uses more or less predictable “guesstimates” of human muscle movements to explain how the brain draws on both what it has recently learned and what it has known for some time to anticipate what it needs in order to develop new motor skills.

The engineers, from Johns Hopkins, MIT and Northwestern, exploited the fact that all people show similar “probable” learning patterns and use them to develop and fine-tune new movements, whether they are babies learning to walk or stroke patients re-connecting brain-body muscle links.

In their report this week in Nature Neuroscience, the team says their new tool could make it possible to predict the best ways to teach new movements and help design physical therapy regimens for the disabled or impaired.

Reza Shadmehr, Ph.D., professor of biomedical engineering at Hopkins, who with his colleagues built the new model, says the artificial brain in the computer, like its natural counterpart, is guided in part by a special kind of statistical “probability” theory called Bayesian math.

Unlike conventional statistical analysis, a Bayesian probability is a subjective “opinion” that measures a “learner’s” individual degree of belief in a particular outcome when that outcome is uncertain. The idea, as applied to the workings of a brain, is that each brain uses what it already knows to “predict” or “believe” that something new will happen, then uses that information to help make it so.
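
In rough terms, Bayes’ rule turns a prior belief plus new evidence into an updated belief. The short worked example below is a hypothetical illustration of that idea, not a calculation from the study; all the numbers are invented.

    # Hypothetical numbers illustrating Bayesian belief updating.
    prior = 0.30                        # P(outcome) before any new evidence
    p_evidence_if_outcome = 0.80        # P(evidence | outcome)
    p_evidence_if_no_outcome = 0.10     # P(evidence | no outcome)

    # Bayes' rule: P(outcome | evidence) =
    #   P(evidence | outcome) * P(outcome) / P(evidence)
    p_evidence = (p_evidence_if_outcome * prior
                  + p_evidence_if_no_outcome * (1 - prior))
    posterior = p_evidence_if_outcome * prior / p_evidence
    print(round(posterior, 3))          # 0.774: belief strengthens after supporting evidence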

“We used the idea that prior experience and belief affect the probability of future outcomes, such as taking an alternate route to work on Friday because you’ve experienced heavy traffic Tuesday, Wednesday and Thursday and believe strongly that Friday will be just as bad,” says Shadmehr. E-mail spam filters operate on a similar principle: they predict which key words are “probably” attached to mail you don’t want, “learning” as they go to fine-tune what they exclude from your in-box.
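
A minimal sketch of that spam-filter principle is below; the word probabilities are invented for illustration, whereas a real filter would learn them from large volumes of mail.

    # Toy naive-Bayes-style spam score; all probabilities here are made up.
    p_word_if_spam = {"free": 0.40, "winner": 0.30, "meeting": 0.05}
    p_word_if_ham  = {"free": 0.05, "winner": 0.02, "meeting": 0.30}
    p_spam = 0.5                        # prior belief that a message is spam

    def spam_probability(words):
        spam_score, ham_score = p_spam, 1 - p_spam
        for w in words:
            spam_score *= p_word_if_spam.get(w, 0.5)   # unknown words count as neutral
            ham_score  *= p_word_if_ham.get(w, 0.5)
        return spam_score / (spam_score + ham_score)

    print(spam_probability(["free", "winner"]))  # ~0.99: probably unwanted
    print(spam_probability(["meeting"]))         # ~0.14: probably mail you want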

The computer model, Shadmehr says, almost precisely duplicates the results of experiments that tested the ability of monkeys to visually track rapid flashes of light. Experiments using such rapid eye movements, or saccades, are a staple in studying how the brain controls movement.

Initially, the learner made large errors, but it also stored information about its mistakes in a memory bank so it could adapt and make more accurate predictions the next time around. Every time the learner repeated the task, it would sift through the prior knowledge in its memory banks and make a prediction about how to move, which in turn would also be memorized. While short-term memory was periodically purged, repeated errors were transferred to a long-term memory bank.
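
One common way to sketch that kind of learner in code is with a fast, quickly fading state and a slow, long-lasting one, both driven by prediction error. The structure and constants below are our illustration of that idea, not the equations from the Nature Neuroscience paper.

    # Error-driven learner with a short-term (fast) and long-term (slow) memory.
    fast, slow = 0.0, 0.0                     # current contents of the two "memory banks"
    retain_fast, retain_slow = 0.60, 0.998    # fast memory fades, slow memory persists
    learn_fast, learn_slow = 0.30, 0.05       # fast memory updates quickly, slow one slowly

    target = 1.0                              # the movement the learner must produce
    for trial in range(200):
        prediction = fast + slow              # act on everything remembered so far
        error = target - prediction           # mismatch between intention and outcome
        fast = retain_fast * fast + learn_fast * error
        slow = retain_slow * slow + learn_slow * error

    print(round(fast + slow, 3))  # close to 1.0: repeated errors have settled into the slow memory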

The computer learner was tasked with “looking” at a spot of light. Then all the lights were turned off. The spot of light was turned on again and the computer learner was again asked to look at that same spot. The learner’s speed and pattern in adapting its movements matched the experimental results of the monkeys almost perfectly. “We found that this Bayesian model can explain almost all of the phenomena we observe in regard to learning motor movements,” says Shadmehr.

Beyond possible use in helping stroke patients, the new tool might also be applied to better understand how we learn language, develop ideas and make memories. “How we learn to think operates under many of the same principles as how we learn to move,” Shadmehr says.


Source: Johns Hopkins Medical Institutions
