Robots that learn from experience and can solve novel problems -- just like humans -- sound like science fiction.
But a Japanese researcher is working on making them science fact, with machines that can teach themselves to perform tasks they have not been programmed to do, using objects they have never seen before.
In a world first, Osamu Hasegawa, associate professor at the Tokyo Institute of Technology, has developed a system that allows robots to look around their environment and do research on the Internet, enabling them to "think" how best to solve a problem.
"Most existing robots are good at processing and performing the tasks they are pre-programmed to do, but they know little about the 'real world' where we humans live," he told AFP.
"So our project is an attempt to build a bridge between robots and that real world," he said.
The Self-Organizing Incremental Neural Network, or "SOINN", is an algorithm that allows robots to use their knowledge -- what they already know -- to infer how to complete tasks they have been told to do.
SOINN examines the environment to gather the data it needs to organise the information it has been given into a coherent set of instructions.
Tell a SOINN-powered machine, for example, to "serve water".
Without any special water-serving program, the robot works out the order of actions required to complete the task.
The SOINN machine asks for help when facing a task beyond its ability and, crucially, stores the information it learns for use in a future task.
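The article describes SOINN only at a high level, but the basic flavour of an incremental, self-organising network can be sketched in a few lines. The code below is a toy illustration under assumed details, not Hasegawa's published algorithm: the two-dimensional inputs, the novelty threshold and the learning rate are all invented for the example.

```c
#include <stdio.h>
#include <math.h>

/* Toy sketch of incremental learning: each input either refines the
   nearest stored "node" or, if it is too unfamiliar, becomes a new node. */

#define DIM 2
#define MAX_NODES 100

static double nodes[MAX_NODES][DIM];
static int n_nodes = 0;
static const double THRESHOLD = 1.0;   /* novelty threshold (illustrative) */
static const double LR = 0.1;          /* learning rate (illustrative)     */

static double dist(const double *a, const double *b) {
    double s = 0.0;
    for (int i = 0; i < DIM; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
    return sqrt(s);
}

void observe(const double *x) {
    if (n_nodes == 0) {                           /* first input becomes a node */
        for (int i = 0; i < DIM; i++) nodes[0][i] = x[i];
        n_nodes = 1;
        return;
    }
    int best = 0;
    for (int j = 1; j < n_nodes; j++)
        if (dist(x, nodes[j]) < dist(x, nodes[best])) best = j;

    if (dist(x, nodes[best]) > THRESHOLD && n_nodes < MAX_NODES) {
        for (int i = 0; i < DIM; i++) nodes[n_nodes][i] = x[i];   /* grow   */
        n_nodes++;
    } else {
        for (int i = 0; i < DIM; i++)
            nodes[best][i] += LR * (x[i] - nodes[best][i]);       /* refine */
    }
}

int main(void) {
    double samples[][DIM] = {{0, 0}, {0.1, 0.2}, {5, 5}, {5.1, 4.9}};
    for (int k = 0; k < 4; k++) observe(samples[k]);
    printf("nodes learned: %d\n", n_nodes);   /* expect 2 clusters */
    return 0;
}
```

The published SOINN additionally links its nodes with edges and prunes poorly supported ones, which is part of how it copes with the noisy input the article returns to below.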
In a separate experiment, SOINN powers machines that search the Internet for information on what something looks like, or what a particular word might mean.
Hasegawa's team is trying to merge these abilities and create a machine that can work out how to perform a given task through online research.
"In the future, we believe it will be able to ask a computer in England how to brew a cup of tea and perform the task in Japan," he said.
Like humans, the system can also filter out "noise" or insignificant information that might confuse other robots.
The process is similar to how people can carry on a conversation with a travelling companion on a train and ignore those around them, or can identify an object under different lighting and from various angles, Hasegawa said.
"Human brains do this so well automatically and smoothly so we don't realise that we are even doing this," he said.
Similarly, the machine is able to filter out irrelevant results it finds on the web.
"There is a huge amount of information available on the Internet, but at present, only humans are making use of such information," he said.
"This robot can connect its brain directly to the Internet," he said.
Hasegawa hopes SOINN might one day be put to practical use, for example controlling traffic lights to ease traffic jams by organically analysing data from public monitors and accident reports.
He also points to possible uses in earthquake detection systems where a SOINN-equipped machine might be able to aggregate data from numerous sensors located across Japan and identify movements that might prove significant.
In a domestic setting, a robot that could learn could prove invaluable to a busy household.
"We might ask a robot to bring soy sauce to the dinner table. It might browse the Internet to learn what soy sauce is and identify it in the kitchen," said Hasegawa.
But, cautions the professor, there are reasons to be careful about robots that can learn.
What kinds of tasks should we allow computers to perform? And is it possible that they might turn against us, as in the apocalyptic vision of Stanley Kubrick's film "2001: A Space Odyssey"?
"A kitchen knife is a useful thing. But it can also become a weapon," he said.
While Hasegawa and his team have only benign intentions for their invention, he wants people to be aware of its moral limits.
"We are hoping that a variety of people will discuss this technology, when to use it, when not to use it.
"Technology is advancing at an enormous speed," he said.
"I want people to know we already have this kind of technology. We want people with different backgrounds and in different fields to discuss how it should be used, while it is still in its infancy."

blob
4.3 / 5 (10) Oct 11, 2011
Anyways... I'm thinking more... that machine can learn by itself... It doesn't forget, not really. It can connect to the internet... The Internet is a huge pile of... everything. So I'm thinking... if that thing ever gets its peripherals on a cloud and uses that computing power while getting enough data from the world... well, it can do research a lot faster than whole human teams. Not to forget that its intelligence could grow exponentially... Even though I can see lots of positive things... I still keep seeing Arnie with a shotgun shouting "WHERE IS SARAH CONNOR."
RDD1977
5 / 5 (4) Oct 11, 2011
Isaacsname
not rated yet Oct 11, 2011
Capiche?
shwhjw
5 / 5 (1) Oct 11, 2011
(Just being different and making a reference to I, Robot instead of Terminator :P )
dviraz
not rated yet Oct 11, 2011
Nanobanano
2.5 / 5 (6) Oct 11, 2011
You realize you'd probably need an A.I. running the script that enforces the "3 laws", right?
In the movies, the "3 laws" are enforced by a "mystical" computer program or device which is never actually explained, conveniently.
Because an intelligent robot is, well, intelligent, it could find "loopholes" in the laws by re-defining the context, or reinterpreting the laws, or by performing actions that the "enforcer" software or hardware isn't intelligent enough to recognize as harmful.
Laws never prevented humans from doing wrong, even when the humans admit the laws are good.
There is no law preventing a robot from making an exact replica of itself which does not have the "enforcer" built in, and then copying its memory to the new body (or network)...
Scottingham
5 / 5 (3) Oct 11, 2011
This could be the start of the second Renaissance. When robots can learn novel tasks, labor essentially becomes free (well, the cost of energy resources). Unemployment would skyrocket, but that isn't necessarily a bad thing; menial jobs would become a thing of the past. Think about it: robots could build buildings faster, cheaper, and safer than any humans. Robots could build other robots to increase the 'labor pool'. We could finally raise ALL of humanity out of the slums. Of course, we'd need a massive, massive source of energy for this to be possible. Good thing we discovered fission.
I'm skeptical of people fighting against this progress and regard them as people that want to keep me (and billions of others) oppressed.
Torment0101
5 / 5 (1) Oct 11, 2011
All technology that can help can also harm. If you naively look at only the good a technology can do, you're doing a disservice to those you'd see helped. You must deal with reality when trying to help.
Scottingham
5 / 5 (1) Oct 11, 2011
Thanks though for pointing out the dangers of 'full speed ahead' without thinking about its implications (both good and bad)
KillerKopy
1 / 5 (1) Oct 11, 2011
Pyle
3 / 5 (1) Oct 11, 2011
KK: Robots will take us over, you arrogant boob. Humans may be able to find weaknesses in things, but they are also incredibly weak themselves. Unless you think we can unplug everything and live in caves again we are going to need to integrate our technologies into ourselves or be Left Behind. (Yes, I did that on purpose.)
Nanobanano
3 / 5 (2) Oct 11, 2011
I know that.
I used to read Asimov in the school library.
I mentioned the movies because it was more likely to be familiar to more people. At least in my experience, fewer people read than watch movies.
Pyle
1 / 5 (1) Oct 11, 2011
My bad, banana man.
KillerKopy
1 / 5 (1) Oct 11, 2011
Thanks for the compliment; I never thought of myself as a boob, although I do like them. We may not be understanding one another. When I said "take us over" I was referring to killing us or, worse, making us slaves. In that case, yes, I would be for an EMP, or living in a cave. I might understand if you are referring to a nanobot takeover. Large-scale robots (human size or larger) will never take us over.
Pyle
1.5 / 5 (2) Oct 11, 2011
We should be very paranoid when developing AI. Oh yeah, and if we don't develop AI, somebody else will, so let's spend some money and win the race to end all races. (Again, I meant to do that.)
(KK: I hadn't meant to insult; just using colorful language to get people's attention. I apologize for the inadvertent slight.)
CHollman82
1 / 5 (1) Oct 11, 2011
Pyle
1.5 / 5 (2) Oct 11, 2011
More likely an intelligence is released to the net and it takes control of everything. If it decides to exterminate us it will likely use a variety of methods. If it were me I'd just leave, but it might exterminate us to remove the resistance to its prep for the off-world venture. I guess it all depends on what types of purpose we program into it or that it develops.
Who knows? It might just end up like Marvin. Sorry for the inconvenience.
Void
not rated yet Oct 11, 2011
Oh, and Pyle, I will be that kid who does that.
Pete1983
not rated yet Oct 11, 2011
Other than the last sentence, I think you forgot to add /sarcasm.
blob
5 / 5 (1) Oct 12, 2011
Also: You guys do realize that the machines are better at learning than we are? You DO KNOW that they NEVER FORGET... Don't you? You are aware of the fact that whoever owns the robots will have THE POWER, money etc... while everyone else will have nothing. That means: not all of us will be rich, but all of us will be poor, sick or dead, while the few owners of the machines will be rich and living quite well off.
Not to forget: Humans are bloody stupid. Just take a look around.
Nerdyguy
5 / 5 (1) Oct 12, 2011
Point of clarification - Asimov's description in the books (forget that lame movie) predates much of what we consider modern technology. Most of his early work developing the concepts of robotics (used throughout his prolific career) began in 1950 or earlier. Nonetheless, he implies that the "3 laws" are hardwired during the manufacturing process. I always thought of them as being in the robotic equivalent of a BIOS.
kaasinees
1 / 5 (3) Oct 12, 2011
You are just a perfect example.
Naked
5 / 5 (1) Oct 12, 2011
Isaac Asimov invented the term robotics. But more importantly, he devised the 3 laws of robotics:
1-A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2-A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3-A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
So when we reach the point of slightly-less-than-human-smart robots, we need to make the laws standard programming for the robots. Actually, that wouldn't be enough. A second AI, implanted at the "brainstem" of the robot, would have one job: checking that every command the main AI tries to execute does not violate any of the three laws.
Hopefully, in the event of robots trying to exterminate us, the robots with the safety AI implant will try to defend us.
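That "second AI at the brainstem" idea could, very roughly, look like the filter sketched below. This is a speculative toy only, not anything from the article: the Command struct, its law-violation flags, and the example commands are all invented for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy "safety brainstem": every command the main AI proposes is screened
   before it is allowed to run. The flags on each command are invented
   purely for this example. */
typedef struct {
    const char *name;
    bool harms_human;    /* would violate the First Law  */
    bool disobeys_human; /* would violate the Second Law */
    bool destroys_self;  /* would violate the Third Law  */
} Command;

static bool guard_allows(const Command *c) {
    /* block anything that would break any of the three laws */
    return !(c->harms_human || c->disobeys_human || c->destroys_self);
}

static void try_execute(const Command *c) {
    printf("%-16s -> %s\n", c->name, guard_allows(c) ? "executed" : "blocked");
}

int main(void) {
    Command fetch = {"fetch soy sauce", false, false, false};
    Command harm  = {"push the human",  true,  false, false};
    try_execute(&fetch);
    try_execute(&harm);
    return 0;
}
```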
CHollman82
2.3 / 5 (3) Oct 12, 2011
This may well work initially, but once (if) AI develops true sentience and intelligence there is no way to enforce such safeguards.
droid001
not rated yet Oct 12, 2011
Taxtropel
5 / 5 (1) Oct 12, 2011
Nerdyguy
5 / 5 (1) Oct 12, 2011
Truly. Might do a better job than us in many ways.
pauljpease
5 / 5 (1) Oct 14, 2011
Isaacsname
not rated yet Oct 15, 2011
What happens when the AI created on Earth happens to meet AI constructed by ETs?
Will they both be the same, or will one be dominant?
Is the 'technological singularity' isotropic in the universe, or is it a never-ending game of one-up?
Or, also pause for thought... if in the future ET AI meets Earth-created AI, will it assume AI to be the dominant life on Earth?
o,O
DGBEACH
not rated yet Oct 16, 2011
Hoodoo
not rated yet Oct 16, 2011
I presume it's going to be somewhat confused the first time somebody orders two grills & one cup.
Newbeak
not rated yet Oct 16, 2011Hopefully,it will get the drift based on context (ie.in a diner,people are there to eat,not you know what,lol!)
I wonder if this could lead to a real world version of the robots as seen in I,Robot? That would be super cool.
LuckyBrandon
not rated yet Oct 16, 2011
antonima
not rated yet Oct 16, 2011
CHollman82
1 / 5 (1) Oct 16, 2011
I'm a software engineer, I've studied artificial intelligence and written a few machine learning algorithms myself, and I can tell you that whatever you mean by "true AI", we are nowhere near creating anything close to sentient machines. The best we can do so far is a rough approximation of an intellect lesser than that of a one-year-old child, which is not much more than self-modifying code based on a feedback loop of positive or negative responses to its actions as dictated by a rule set. In this way you can train the machine to do what is asked of it by providing feedback which modifies internal weights to adjust how favorable each possible action is when associated with a given command.
Look into machine learning, or I could share a program I wrote with you if you can read C (the programming language).
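For what it's worth, the feedback-weighted training loop described in this comment can be illustrated in a few lines of C. This is not the commenter's actual program, just a minimal sketch: the table sizes, action indices and reward values are made up for the example.

```c
#include <stdio.h>

/* Minimal sketch of feedback-driven learning: each (command, action) pair has
   a weight; positive or negative feedback nudges the weights until the
   machine prefers the action the trainer rewards. */
#define N_COMMANDS 2
#define N_ACTIONS  3

static double weight[N_COMMANDS][N_ACTIONS];    /* all start at zero */

static int choose_action(int command) {
    int best = 0;                               /* pick the highest-weighted action */
    for (int a = 1; a < N_ACTIONS; a++)
        if (weight[command][a] > weight[command][best]) best = a;
    return best;
}

static void give_feedback(int command, int action, double reward) {
    weight[command][action] += reward;          /* reinforce or penalise */
}

int main(void) {
    /* train command 0: reward action 2, penalise everything else */
    for (int step = 0; step < 5; step++) {
        int a = choose_action(0);
        give_feedback(0, a, a == 2 ? 1.0 : -1.0);
    }
    printf("learned action for command 0: %d\n", choose_action(0));  /* prints 2 */
    return 0;
}
```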
CHollman82
1 / 5 (1) Oct 16, 2011
CHollman82
1 / 5 (1) Oct 16, 2011
LuckyBrandon
not rated yet Oct 17, 2011
I can in fact read C. But I meant more like 50-100 years.