As sensors proliferate, new opportunities are emerging in the field of machine learning.
Biological learning systems run the gamut from the lowly roundworm (Caenorhabditis elegans) with its 300 or so neurons, all the way up to the adult elephant brain, with its 200 billion neurons. Whether they're located in fruit flies or cockroaches, chimpanzees or dolphins, all neurons do the same thing: they process and transmit information. And the reason for this is the same across the biological board: To avoid danger and maximize success in sustaining and propagating themselves, all organisms must be able to sense the environment, respond to it accordingly, and remember those stimuli that indicate risks and rewards.
Learning, in short, is a prerequisite for the survival of individuals and species in the natural world. The same iron law, however, is becoming increasingly applicable to the world of man-made systems.
According to Dr. Volker Tresp, one of Siemens' top machine learning authorities and a computer science professor at Ludwig Maximilian University in Munich, there are three kinds of learning: memorization (such as the ability to remember facts); skills (such as the ability to learn to throw a ball); and abstraction (such as the ability to form rules based on observations). Computers, which are born whizzes in the first area, are rapidly catching on to the other two.
Take, for instance, the skill needed to produce a flawlessly even sheet of steel of a given thickness—an area in which Siemens has been a leader for over 20 years. "Here," says Tresp, "the simplest learning schema is to make a prediction, and then check to see if the output product meets the desired specification." Confronted with an output requirement for, say, a particularly high grade of steel, an automated rolling mill would take sensor data (composition, strip temperature, etc.) into account, estimate the required pressure based on previously learned information, and then adjust itself accordingly in real time in response to its measurement data until it achieved exactly the right pressure to produce the desired thickness. "In a neural network-based learning system," explains Tresp, "this would be achieved by adjusting the relative weight matrix (see diagram) of all the factors that influence a given parameter, such as thickness."
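The predict-and-check schema Tresp describes can be sketched as a simple online learning loop. The sketch below is illustrative only: the feature names, target pressure, and learning rate are invented placeholders, not Siemens' actual mill parameters or model.

```python
import numpy as np

def predict_pressure(weights, features):
    """Predict roll pressure as a weighted sum of sensor readings."""
    return float(weights @ features)

def update_weights(weights, features, predicted, measured, lr=0.1):
    """Nudge each weight in proportion to its input and the prediction error."""
    error = measured - predicted
    return weights + lr * error * features

weights = np.zeros(3)                 # one weight per sensor input, initially untrained
features = np.array([0.8, 0.6, 1.2])  # hypothetical normalized sensor data:
                                      # composition index, strip temperature, entry gauge
measured_pressure = 1.5               # the pressure the mill actually needed

# Repeat the schema: predict, compare against the measured outcome, adjust.
for _ in range(100):
    p = predict_pressure(weights, features)
    weights = update_weights(weights, features, p, measured_pressure)

print(round(predict_pressure(weights, features), 2))  # → 1.5
```

The design choice here is the simplest possible one (a least-mean-squares update on a single linear output); a real mill controller would learn from many strips at once and use a nonlinear model, but the predict-compare-adjust cycle is the same.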
Beyond memorization and the ability to optimize skills, artificial systems are increasingly being called upon to generalize, or abstract the characteristics that make an individual item a member of a group. Optical character recognition (OCR), which has traditionally been used for high-speed postal sorting, is a case in point. Since approximately 1985, when this technology was first developed, accuracy has skyrocketed from single-digit recognition rates to over 95 percent for handwritten Latin alphabets and over 90 percent for Arabic handwriting. In fact, in 2007, Siemens' ARTread learning system won first place in the International Conference on Document Analysis and Recognition contest for OCR in Arabic. Given its exceptionally high level of reliability, OCR is beginning to migrate to applications such as automatic license plate recognition and industrial vision.
What else will be possible in the future? Better performance and increasing numbers of sensors will open up great new opportunities, especially for industry. More and more data is becoming accessible locally and through networks. However, this flood of data has to be intelligently analyzed if it is to be useful.
New deep learning methods use far more layers of artificial neurons than earlier approaches did. Each layer handles a single level of abstraction of the material to be learned. Because numerous layers are interconnected, the resulting insights are much more detailed than those of previous artificial neural networks. Most of us carry such a network around with us, because deeply layered neural networks are used for the voice recognition systems of all state-of-the-art Android smartphones. Tresp's team, however, is going a step further by modeling mathematical knowledge networks encompassing up to 10 million objects.
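The layered idea can be illustrated with a minimal forward pass: each layer transforms the previous layer's representation into a new, more abstract one. The layer sizes and random weights below are arbitrary placeholders, not a real speech-recognition network.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Standard rectified-linear activation used between layers."""
    return np.maximum(0.0, x)

# Three stacked layers; sizes are illustrative only.
layer_sizes = [8, 16, 8, 4]
weights = [rng.standard_normal((n_in, n_out)) * 0.1
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

x = rng.standard_normal(8)   # raw input, e.g. acoustic features in speech recognition
activation = x
for W in weights:            # forward pass through the deep stack
    activation = relu(activation @ W)

print(activation.shape)      # (4,) — the deepest, most abstract representation
```

Each pass through the loop is one "level of abstraction" in the article's sense: the final four-dimensional vector summarizes the input far more compactly than the raw features do.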
These networks can make up to 10¹⁴ possible predictions regarding the relationships between these objects—a figure that corresponds more or less to the number of synapses in an adult human brain.
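One generic way to score such relationship predictions—a hedged sketch, not the specific model used by Tresp's team—is to give each object a low-dimensional embedding and score a candidate link between two objects by the dot product of their embeddings. The object count and embedding size below are toy values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "knowledge network": each object gets a low-dimensional embedding vector.
n_objects, dim = 1000, 16
embeddings = rng.standard_normal((n_objects, dim))

def relation_score(a, b):
    """Higher score = the model considers a link between objects a and b more plausible."""
    return float(embeddings[a] @ embeddings[b])

# With n objects, up to n * n object pairs can be scored per relation type.
# Scale n to 10 million and the candidate predictions reach the order of 10^14.
print(n_objects * n_objects)  # → 1000000
```

This also shows why the prediction count grows so fast: the model never stores 10¹⁴ facts, it only stores one embedding per object and computes a score for any pair on demand.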