Beyond the technology headlines announcing new wearable designs, curved displays and 3D printers, there is another research path. Researchers continue to explore how computers may learn from their own mistakes, a capability that could open a new chapter in the way humans interact with machines.
Such brainlike computers would have important consequences for applications in facial and speech recognition, navigation and planning. In August last year, IBM announced a software ecosystem designed for programming silicon chips that have "an architecture inspired by the function, low power, and compact volume of the brain." In September, MIT Technology Review reported that Facebook had formed a new research group working on "deep learning," using simulated networks of brain cells to process data.
In 2012, meanwhile, Google researchers performed an identification task without supervision: a team of scientists and programmers connected 16,000 computer processors and let the resulting artificial neural network scan a pool of 10 million images taken from YouTube videos, where it taught itself to recognize cats. The result, presented at the International Conference on Machine Learning in Edinburgh, marked a step into the newer realm of self-taught learning.
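The key point of that experiment is that no labels were involved: the network discovered useful features purely by modeling its raw input. The same idea can be sketched in miniature with a one-hidden-layer autoencoder, a standard unsupervised technique in which a network learns features by reconstructing its input. Everything below is illustrative, not Google's actual architecture: the data is synthetic and the sizes are tiny compared with their 16,000-processor system.

```python
import numpy as np

# Toy sketch of unsupervised feature learning with a one-hidden-layer
# autoencoder. Hyperparameters and data are illustrative only; the point
# is that reconstruction error falls without any labels being used.

rng = np.random.default_rng(0)

def reconstruction_error(X, W1, W2):
    """Mean squared error of reconstructing X from its learned features."""
    return np.mean((np.tanh(X @ W1) @ W2 - X) ** 2)

# Synthetic stand-ins for unlabeled image patches: two 16-dim clusters.
X = np.vstack([rng.normal(0, 0.3, (50, 16)) + 1.0,
               rng.normal(0, 0.3, (50, 16)) - 1.0])

n, d, h = X.shape[0], X.shape[1], 8
W1 = rng.normal(0, 0.1, (d, h))   # encoder weights
W2 = rng.normal(0, 0.1, (h, d))   # decoder weights
err_before = reconstruction_error(X, W1, W2)

lr = 0.1
for _ in range(200):                            # plain gradient descent
    H = np.tanh(X @ W1)                         # hidden features
    E = H @ W2 - X                              # reconstruction residual
    gW2 = H.T @ E / n                           # gradient w.r.t. decoder
    gW1 = X.T @ ((E @ W2.T) * (1 - H**2)) / n   # backprop through tanh
    W1 -= lr * gW1
    W2 -= lr * gW2

err_after = reconstruction_error(X, W1, W2)
print(err_before, err_after)  # the error drops, with no labels involved
```

The hidden activations `H` are the learned features; in the Google experiment, one such learned feature happened to respond strongly to cat faces.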
Last month, a detailed report in The New York Times surveyed research under way at large technology groups such as Google, as well as initiatives planned for 2014. As for newer beginnings, the report said, "IBM and Qualcomm, as well as the Stanford research team, have already designed neuromorphic processors, and Qualcomm has said that it is coming out in 2014 with a commercial version, which is expected to be used largely for further development."
(Last year, Qualcomm announced that its R&D teams were working on processors, dubbed Zeroth, that mimic the human brain and nervous system: "Instead of preprogramming behaviors and outcomes with a lot of code, we've developed a suite of software tools that enable devices to learn as they go and get feedback from their environment.")
At IBM Research, meanwhile, researchers "are working to create a FORTRAN for neurosynaptic chips," according to Dharmendra S. Modha, principal investigator and senior manager at IBM Research. In an IBM video on building blocks for cognitive systems, Modha remarked on how densely sensors, cameras, and microphones now populate earth and space. "We are inundated with real-time noisy multimodal data." In turn, he said, today's computers are increasingly challenged by power, by volume, and by speed of response. "Cognitive computing is a new synthesis of software and silicon inspired by the brain."
Explore further: Qualcomm's brain-inspired chip: Good phone, good robot
More information: www.nytimes.com/2013/12/29/sci… from-experience.html