Living Safely with Robots, Beyond Asimov's Laws

Jun 22, 2009, by Lisa Zyga
TOPIO 2.0 - TOSY Ping Pong Playing Robot version 2 at Nuremberg International Toy Fair 2009. Image: Wikimedia Commons

(PhysOrg.com) -- "In 1981, a 37-year-old factory worker named Kenji Urada entered a restricted safety zone at a Kawasaki manufacturing plant to perform some maintenance on a robot. In his haste, he failed to completely turn it off. The robot’s powerful hydraulic arm pushed the engineer into some adjacent machinery, thus making Urada the first recorded victim to die at the hands of a robot."

In situations like this one, as described in a recent study published in the International Journal of Social Robotics, most people would not consider the accident to be the fault of the robot. But as robots are beginning to spread from industrial environments to the real world, human safety in the presence of robots has become an important social and technological issue. Currently, countries like Japan and South Korea are preparing for the “human-robot coexistence society,” which is predicted to emerge before 2030; South Korea predicts that every home in its country will include a robot by 2020. Unlike industrial robots that toil in structured settings performing repetitive tasks, these “Next Generation Robots” will have relative autonomy, working in ambiguous human-centered environments, such as nursing homes and offices. Before hordes of these robots hit the ground running, regulators are trying to figure out how to address the safety and legal issues that are expected to occur when an entity that is definitely not human but more than machine begins to infiltrate our everyday lives.

In their study, authors Yueh-Hsuan Weng, a former staff member of Taiwan’s Conscription Agency, Ministry of the Interior, who is currently based in Yoshida, Kyoto, Japan, along with Chien-Hsun Chen and Chuen-Tsai Sun, both of National Chiao Tung University in Hsinchu, Taiwan, have proposed a framework for a legal system focused on Next Generation Robot safety issues. Their goal is to help ensure safer robot design through “safety intelligence” and to provide a method for dealing with accidents when they do inevitably occur. The authors have also analyzed Isaac Asimov’s Three Laws of Robotics, but (like most robotics specialists today) they doubt that the laws could provide an adequate foundation for ensuring that robots perform their work safely.

One guiding principle of the proposed framework is categorizing robots as “third existence” entities, since Next Generation Robots are considered to be neither living/biological (first existence) nor non-living/non-biological (second existence). A third existence entity will resemble living things in appearance and behavior, but will not be self-aware. While robots are currently classified legally as second existence (human property), the authors believe that a third existence classification would simplify dealing with accidents in terms of responsibility distribution.

One important challenge involved in integrating robots into human society deals with “open texture risk” - risk occurring from unpredictable interactions in unstructured environments. An example of open texture risk is getting robots to understand the nuances of natural (human) language. While every word in natural language has a core definition, the open texture character of language allows for interpretations that vary due to outside factors. As part of their safety intelligence concept, the authors have proposed a “legal machine language,” in which ethics are embedded into robots through code, which is designed to resolve issues associated with open texture risk - something which Asimov’s Three Laws cannot specifically address.
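The paper stays at the conceptual level, but the contrast with natural language can be made concrete. Below is a minimal sketch of what rules in such a “legal machine language” might look like; the names (Action, SafetyRule, is_permitted) and thresholds are invented for illustration, not taken from the study. Each rule is a precise, machine-checkable predicate rather than an open-textured phrase like “do not injure a human”:

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    kind: str              # e.g. "move_arm"
    force_newtons: float   # estimated force applied to the surroundings
    human_within_m: float  # distance to the nearest detected human, in meters

# A rule is a predicate over a proposed action: True means "permitted".
SafetyRule = Callable[[Action], bool]

# Unlike "a robot may not injure a human being", each rule is a precise,
# testable condition with no open texture. Thresholds are invented.
rules: List[SafetyRule] = [
    lambda a: not (a.human_within_m < 0.5 and a.force_newtons > 20.0),
    lambda a: a.force_newtons <= 150.0,  # absolute actuator force cap
]

def is_permitted(action: Action) -> bool:
    # An action is permitted only if every embedded rule allows it.
    return all(rule(action) for rule in rules)

risky = Action("move_arm", force_newtons=80.0, human_within_m=0.3)
print(is_permitted(risky))  # False: too much force too close to a human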

“During the past 2,000 years of legal history, we humans have used human legal language to communicate in legal affairs,” Weng told PhysOrg.com. “The rules and codes are made by natural language (for example, English, Chinese, Japanese, French, etc.). When Asimov invented the notion of the Three Laws of Robotics, it was easy for him to apply the human legal language into his sci-fi plots directly.”

As Chen added, Asimov’s Three Laws were originally written for literary purposes, and their ambiguity leaves the responsibilities of robot developers, robot owners, and governments unclear.

“The legal machine language framework stands on legal and engineering perspectives of safety issues, which we face in the near future, by combining two basic ideas: ‘Code is Law’ and ‘Embedded Ethics,’” Chen said. “In this framework, the safety issues are not only based on the autonomous intelligence of robots as it is in Asimov’s Three Laws. Rather, the safety issues are divided into different levels with individual properties and approaches, such as the embedded safety intelligence of robots, the manners of operation between robots and humans, and the legal regulations to control the usage and the code of robots. Therefore, the safety issues of robots could be solved step by step in this framework in the future.”
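Chen’s division into levels lends itself to a similar sketch. In the toy example below (layer names and checks are again invented, not the authors’ specification), a proposed operation is evaluated against each level in turn, so a refusal can be attributed to a specific layer rather than to the robot as a whole:

from typing import Callable, Dict, Optional

# Each check inspects a dict describing the proposed operation.
Check = Callable[[dict], bool]

layers: Dict[str, Check] = {
    # Level 1: safety intelligence embedded in the robot's own code.
    "embedded_safety": lambda s: s["force_newtons"] <= 150.0,
    # Level 2: the manner of operation between robots and humans.
    "human_interaction": lambda s: s["human_within_m"] >= 0.5 or s["speed_mps"] <= 0.1,
    # Level 3: legal regulation of what the robot may be used for at all.
    "legal_regulation": lambda s: s["operation"] in s["licensed_operations"],
}

def first_violation(situation: dict) -> Optional[str]:
    # Return the name of the first layer that rejects the situation, if any.
    for name, check in layers.items():
        if not check(situation):
            return name
    return None

situation = {
    "operation": "lift_patient",
    "licensed_operations": {"deliver_meals", "clean_floor"},
    "force_newtons": 90.0,
    "human_within_m": 0.2,
    "speed_mps": 0.05,
}
print(first_violation(situation))  # "legal_regulation": not licensed for this task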

Weng also noted that, by preventing robots from understanding human language, legal machine language could help maintain a distance between humans and robots in general.

“If robots could interpret human legal language exactly someday, should we consider giving them a legal status and rights?” he said. “Should the human legal system change into a human-robot legal system? There might be a robot lawyer, robot judge working with a human lawyer, or a human judge to deal with the lawsuits happening inter-human-robot. Robots might learn the kindness of humans, but they also might learn deceit, hypocrisy, and greed from humans. There are too many problems waiting for us; therefore we must consider whether it is better to let the robots keep a distance from the human legal system and not be too close to humans.”

In addition to using machine language to keep a distance between humans and robots, the researchers also consider limiting the abilities of robots in general. Another part of the authors’ proposal concerns “human-based intelligence robots,” which are robots with higher cognitive abilities that allow for abstract thought and for new ways of looking at one’s environment. However, since a universally accepted definition of human intelligence does not yet exist, there is little agreement on a definition for human-based intelligence. Nevertheless, most robotics researchers predict that human-based intelligence will inevitably become a reality following breakthroughs in computational artificial intelligence (in which robots learn and adapt to their environments in the absence of explicitly programmed rules). However, a growing number of researchers - as well as the authors of the current study - are leaning toward prohibiting human-based intelligence due to the potential problems and lack of need; after all, the original goal of robotics was to invent useful tools for human use, not to design pseudo-humans.

In their study, the authors also highlight previous attempts to prepare for a human-robot coexistence society. For example, the European Robotics Research Network (EURON) is a private organization whose activities include investigating robot ethics, such as with its Roboethics Roadmap. The South Korean government has developed a Robot Ethics Charter, which serves as the world’s first official set of ethical guidelines for robots, including protecting them from human abuse. Similarly, the Japanese government investigates safety issues with its Robot Policy Committee. In 2003, Japan also established the Robot Development Empiricism Area, a “robot city” designed to allow researchers to test how robots act in realistic environments.

Despite these investigations into robot safety, regulators still face many challenges, both technical and social. For instance, on the technical side, should robots be programmed with safety rules, or should they be created with the ability for safety-oriented reasoning? Should robot ethics be based on human-centered value systems, or a combination of human-centered value systems with the robot’s own value system? Or, legally, when a robot accident does occur, how should the responsibility be divided (for example, among the designer, manufacturer, user, or even the robot itself)?
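The first of those questions, rules versus reasoning, can be illustrated with a toy contrast; everything named below is an invented example, not a proposal from the study. A rule-based guard recognizes only the prohibitions it was given, while a reasoning-based guard scores predicted outcomes and so can also reject situations its designers never enumerated:

def rule_based_guard(action: dict) -> bool:
    # Fixed, enumerable prohibitions: easy to audit, blind to novel cases.
    forbidden = {("swing_tool", True)}  # (kind, human_nearby) pairs
    return (action["kind"], action["human_nearby"]) not in forbidden

def reasoning_guard(action: dict) -> bool:
    # Score predicted outcomes instead: covers novel cases, harder to audit.
    expected_harm = action["force_newtons"] * (1.0 if action["human_nearby"] else 0.0)
    return expected_harm < 10.0

# A situation the rule writers never anticipated:
novel = {"kind": "open_door", "human_nearby": True, "force_newtons": 50.0}
print(rule_based_guard(novel))  # True: "open_door" was never listed as forbidden
print(reasoning_guard(novel))   # False: predicted harm to a nearby human is too high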

Weng also indicated that, as robots become more integrated into human society, the importance of a legal framework for social robotics will become more obvious. He predicted that determining how to maintain a balance between human-robot interaction (technology development) and social system design (a legal regulation framework) will present the biggest challenges in safety when the human-robot coexistence society emerges.

More information:

www.yhweng.tw

“Toward the Human-Robot Co-Existence Society: On Safety Intelligence for Next Generation Robots.” Yueh-Hsuan Weng, Chien-Hsun Chen, and Chuen-Tsai Sun. International Journal of Social Robotics. DOI 10.1007/s12369-009-0019-1.

Copyright 2009 PhysOrg.com.
All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.




User comments: 40


lengould100
3.2 / 5 (6) Jun 22, 2009
Hmmm... Would an automobile on advanced autopilot be a robot? (Yes) How about an aircraft? (Yes, even those operating today).
El_Nose
1.8 / 5 (4) Jun 22, 2009
Well, define robot -- and separate that definition from AI -- I am a student of AI and there are two basic mind frames when addressing the subject: A) An AI is any program that can perform a task a human can do with greater efficiency or power (you define the metric), e.g. a tax program is an AI, face recognition software is an AI, spelling and grammar checkers are AIs, software to control the turning of a solar panel to catch the sun's rays is an AI. B) If a human were to interact with the AI and is unable to tell by the interaction that the other party was not human, then it is an AI.

The software is the AI; the closed system unit that is all of the software and optional hardware is what makes a robot.

Now do we also address issues with AI's -- or do we blame the programmer for an unpredictable black box??
Azpod
4.3 / 5 (3) Jun 22, 2009
There may be no need for human-level intelligent robots in an industrial setting (and to a certain extent, a commercial one too.) But you can't ban the technology. If almost-human intelligence is OK in robots, it won't take much for a tinkering 20something to nudge it into the human-level intelligence category.

The answer to this is simple: Next Generation Robots should be treated like animals: still the property of their owners but the responsibility for an accident is more & more placed in the hands of the owners. If I have a pit bull who mauls a neighbor's child, it's not the fault of the pit bull's mother or God or anyone else, but me: the owner. It's my responsibility to keep him locked in a pen and trained not to maul people.

Likewise, if I have a robot servant who runs amok and starts smashing in windows and breaking down doors, it's my responsibility because I should have given it better behavioral guidelines. The only way the builders of the robots (who are currently the primary ones liable for an accident) could be liable would be if the robot itself has a malfunction of some kind.

The more intelligent robots become, the more the blame for misbehavior will fall on their shoulders. When your dog or cat misbehaves, they get into trouble. They get scolded, put outside in a cage or squirted with a water bottle. One way or another, they're punished. And if the punishment isn't overly harsh but still effective, they'll learn from their mistake and will be less likely to repeat it. This will certainly be the case with Next Generation Robots. Even though the owner is ultimately liable, the robot shares in the blame.

I don't see the need to avoid human-level intelligence for machines. As the machine reaches child-like intelligence then teenager-like intelligence and ultimately adult-level intelligence its rights and responsibilities will continue to progress. I imagine cruelty laws will exist for robots with the ability to feel pain (or some other form of displeasure.) The right to own property will eventually come into play and as the robot's abilities warrant, they may even be afforded all the rights & responsibilities you'd expect from any other citizen. Interacting with all the confusing and contradictory information coming from any human is FAR beyond current AI technology. But I can't see why tomorrow's AI technology can't handle it just fine.
CaptJohn
not rated yet Jun 22, 2009
Personally, I just look forward to a quiet, shared life with my "Talking Edu-tainment/references Interapplicator" Babe, knowing that "SHE" (my TERI android) will carry on centuries after I'm gone... dead of old age.
Tangent2
1 / 5 (2) Jun 22, 2009
Wow, I'm quite surprised that no one has caught on to that little mistake yet...
"... thus making Urada the first recorded victim to die at the hands of a robot."

That is not entirely true: another person, by the name of Robert Williams, was actually the first person to be killed by a robot.

Citation:
http://en.wikiped...ji_Urada
defunctdiety
5 / 5 (2) Jun 22, 2009
... there are two basic mind frames when addressing the subject A) An AI is any program that can perform a task a human can do with greater efficiency or power ... B) If a human were to interact with the AI and is unable to tell by the interaction that the other party was not human then it is an AI.


Now I'm not calling you a liar (however I believe you are one: i.e. not a "student" of AI), but you see, neither of those things are AI to me. To me AI means having the ability of independent abstract thought.

Simply processing digitally encoded input, like your example A. is not AI, that's computation. And your example B., that's just a subjective degree of mimicry, something that can be achieved without AI, and something where AI could be present without it. El_Nose, you got some 'splaining to do!
DLuckyE
5 / 5 (1) Jun 22, 2009
....

Simply processing digitally encoded input, like your example A. is not AI, that's computation.

....


One might argue that the human brain works like this...
Lord_T
4 / 5 (2) Jun 22, 2009
Why don't we just stick with nice, simple little robots? One that hoovers up, one that cuts the grass, etc., made foolproof and not programmable. That way no one is going to give your sex bot a new trick via a virus that leaves you not needing a sex bot any more.

Remember the KISS principle. Let's not make them too complicated nor aware of anything but a simple task and the sensors to make sure it is done right and safely.

No need for the three laws, which at this stage are impossible anyway. Only a human-type brain could process them, and our brains don't follow simple instructions like these because they are too complicated and make their own choices.

The KISS principle. A philosophy to build by.
Mercury_01
2.3 / 5 (3) Jun 22, 2009
Yeah, robots don't poop. If it moves but it doesn't poop, it's a robot.
nilbud
not rated yet Jun 23, 2009
El_Nose is full of shit or else has a lot of study to do. Mashing up the definition of a Turing test is a somewhat antiquated way of viewing AI.

In reality the hardware, DSPs and suchlike, has had to improve to today's standards. 2TB of RAM is now feasible, and the bandwidth of data transfer and available GFLOPs are capable of basic AI work. Massively parallel processing and virtual machines are also available, so the concept of having hundreds of AIs analysing a data-stream and selecting the majority decision as the balance of probability is now doable. Back in the 80's, getting a machine to do decent AI required datasets bigger than storage would allow. Another few years of Moore's Law should see us with decent-spec machines at last.
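The majority-decision idea described above is ordinary ensemble voting; a toy version, with random stand-in classifiers rather than real models, fits in a few lines of Python:

import random
from collections import Counter
from typing import Callable, List, Sequence

Classifier = Callable[[Sequence[float]], str]

def make_noisy_classifier(accuracy: float) -> Classifier:
    # A stand-in model that reports the true label with the given probability.
    def classify(sample: Sequence[float]) -> str:
        truth = "safe" if sum(sample) < 1.0 else "unsafe"
        if random.random() < accuracy:
            return truth
        return "unsafe" if truth == "safe" else "safe"
    return classify

def majority_vote(models: List[Classifier], sample: Sequence[float]) -> str:
    # The balance of probability: the label most models agree on wins.
    votes = Counter(model(sample) for model in models)
    return votes.most_common(1)[0][0]

ensemble = [make_noisy_classifier(accuracy=0.7) for _ in range(101)]
print(majority_vote(ensemble, [0.2, 0.3]))  # almost always "safe"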
jeffsaunders
not rated yet Jun 23, 2009
Most comments are not relevant to the article anyway. Robots are here and getting more sophisticated (read: complex) all the time. Restricting the tools used to construct a robot seems to me highly unlikely in a free society.

Perhaps if we regulate society to a point where a robot builder is compelled to program in one specific language and that language has built in constructs limiting what the programmer can do then maybe they can achieve the goal described.

As part of their safety intelligence concept, the authors have proposed a “legal machine language,” in which ethics are embedded into robots through code, which is designed to resolve issues associated with open texture risk - something which Asimov’s Three Laws cannot specifically address.


I don't see such a regulated police state in our future at least I hope not.
designmemetic
4 / 5 (1) Jun 23, 2009
Self-awareness is probably a requirement for adaptive learning. Dividing safety issues into multiple levels seems like a good idea, and is analogous to how people function: instinctive low-level behavior makes most of us dislike the sight of pain in others, and on a higher level we logically eschew violence.
bluehigh
not rated yet Jun 23, 2009
There might be a ... robot judge.

-- and riots and lots of smashed robots.



.. the original goal of robotics was to invent useful tools for human use, not to design pseudo-humans.

-- Says who?



... with the robot's own value system?

-- With a what ??!!!

iamcrazy
not rated yet Jun 23, 2009
if i ever get a robot ill give it a gun
kasen
not rated yet Jun 23, 2009
A robot lawyer...Now there's something to concern yourself with.
Somehow, people seem to be 100% sure that robots will be the next big thing this century. For what it's worth, I definitely hope they won't be. With all the literature that has been written about the subject, I find it has become rather boring.
Then there's this obsession with making them as human-like as possible, starting with shape. Sure, there's the argument that they'll operate in an environment designed for humans, but consider this: more than 3 quarters of the control mechanisms in an average house these days are used for electronic appliances, with which a robot could interface directly. And, for God's sake, there are so many ways of opening a door/hatch other than 5 fingers.
Now we want to create laws for them. As stated in the article, 'code is law'. A robot behaves as per its programming, which is done by a human. The pet analogy made earlier makes perfect sense to me. As for rights and liberties, they'll have to ask/fight for them, like everybody else. Of course, we might find that robots won't even bother with this irrational human endeavour of binding someone with words, and proceed with more efficient ways of binding. Like chains.
TrustTheONE
not rated yet Jun 23, 2009
Do not forget the CYLONS!
DGBEACH
5 / 5 (2) Jun 23, 2009
if i ever get a robot ill give it a gun

Hmm...so if the robot shot and killed you, could that be considered suicide?
denijane
not rated yet Jun 23, 2009
I don't see a reason why, if robots become self-aware, their rights should not progress with them. Technology evolves, so must our legal system. When people became drivers, we created a new kind of law to regulate driving; what's so bad in regulating robots' rights and/or their owners' rights?

I think the article has some good ideas, but they are shadowed by prejudices. "The original goal...not to design pseudo-humans". That might have been the original goal, but it's not the current goal or the future goal. If we help robots become self-aware, we're not going to do it because we want to design pseudo-humans, but because this way they will "serve" us better, and the awareness will be a side-effect. And no matter what "we" want, someone will do it; someone will create a robot that is self-aware, and we'll have to deal with this new situation. What's so bad about this? If aliens decide to get in touch with us tomorrow, won't we create new legislation to manage our interactions with them? Or, in a desperate effort to defend humans, will we deny them the right to exist, the right to understand human language and to reproduce with humans? Absolute nonsense. We have to adapt to the new situation; otherwise, the situation will adapt us.
defunctdiety
not rated yet Jun 23, 2009
One might argue that the human brain works like this...


The mechanism by which a process occurs is vastly different than what that process is capable of. That's like saying a light-bulb is a computer because it uses ac/dc current to turn on and off... or something :P
CompGirl
not rated yet Jun 24, 2009
Maybe I don't get it... Why would we give rights to a robot? It is a machine. It is programmed to do certain things- whether it be learning to play the piano or opening pickle jars- it is still a machine -I wouldn't give land-owning rights to my can opener just because it can sense when the can is open and stop cutting. Yes, AI is very cool, but even if you give a robot self awareness, you can't give it real emotion, real pain or love or loss, it would always be programmed emotion - numbers and computations. They will always follow the logic that they are programmed to (even if the original programming included learning, they are still following the original program instruction- to learn). Sorry, I really just don't understand what the fuss is about.
kimich
1 / 5 (1) Jun 25, 2009
Maybe I don't get it... Why would we give rights to a robot? It is a machine.

Yes, AI is very cool, but even if you give a robot self awareness, you can't give it real emotion, real pain or love or loss, it would always be programmed emotion - numbers and computations.

...

Sorry, I really just don't understand what the fuss is about.


Beautiful!

You are quite right, it's a kind of religion, where atheists have replaced God with the Sentient Robot, even though it can be mathematically proved that it's impossible.

See Kurt Goedel's Incompleteness Theorems, Roger Penrose "The Large, the Small and the Human Mind", BBC Dangerous Knowledge, and Erich Harth's "The Creative Loop, How the Brain Makes a Mind".

We have worked on AI for decades without any progress around sentient computers; if it had been possible, we would have made them by now, but we don't even know what it means to be sentient.

Kim Michelsen
bugmenot23
not rated yet Jun 25, 2009
Humans see (erroneously) the world as individual entities. A useful biological flaw. A robot with a downloadable opsys is not an individual.
A 'simple' lawnmower robot could shred a baby on the lawn.
A crocodile mother carries her hatchlings in her mouth. Would you let a crocodile vacuum your nursery?

DGBEACH
not rated yet Jun 25, 2009
Maybe I don't get it... Why would we give rights to a robot? It is a machine. It is programmed to do certain things- whether it be learning to play the piano or opening pickle jars- it is still a machine -I wouldn't give land-owning rights to my can opener just because it can sense when the can is open and stop cutting. Yes, AI is very cool, but even if you give a robot self awareness, you can't give it real emotion, real pain or love or loss, it would always be programmed emotion - numbers and computations. They will always follow the logic that they are programmed to (even if the original programming included learning, they are still following the original program instruction- to learn). Sorry, I really just don't understand what the fuss is about.

The fun part is when these devices start to reprogram themselves...based upon a conclusion they've reached through your own programming...that they are a better programmer than you! :)
denijane
5 / 5 (1) Jun 25, 2009
I think you miss the whole point. If the robot is self-aware, it can refuse to do the work it's built for, unless you find a way to pay for its "efforts". Yes, that won't be muscle effort, but still, it will spend its time (time that runs much faster than our own) to do something for us. Why should it want to do it? Humans usually do something they would rather not do only in exchange for something - love, sex, food, money to buy love, sex and food. At the point the robot becomes self-aware, it becomes a separate entity with its own needs. In the beginning of the AI, those needs will be programmed, but the idea of AI is to be able to learn. As it learns, it will erase old programming that it finds not useful and will replace it with code more suitable to its understanding of the universe. You do not know what that code would be. You do not know what this AI will find important to do and not to do. You might mow your grass because you don't like it grown and there's the danger of parasites. But why would the AI mow your grass? At the point it has its own needs, you'll have to convince it to do what you want. And most likely this will include money or other services - electricity, repairs. And if the robot has money, why shouldn't it buy land if it sees a reason to do it? The point is you cannot imagine that you can enslave any entity only because you made it! You made your kids, but you stop "enslaving" them when they become adults. What's the difference with the robot? If it's able to take care of itself, to repair itself to an extent, to think, to have needs and desires, what's the difference from a child becoming an adult? That its synapses are made from different material? Does this mean that people with artificial eyes or hearts or hands shouldn't have the same rights as normal people?

See, you all think from the point of view that since we create it, we have to be its full masters. I don't see a reason why. Even more, I don't see a reason why the new entity would agree with you. Yes, it will have some pre-programmed ethics. But just like religious people can become criminals, the computer will also evolve its ethics. And when it decides it no longer needs to obey, we'll have a problem. Because we either will grant it rights to earn and to live in our society, or we'll enslave it, with all the consequences that we might expect. Slaves always find a way to break free. And if humankind decides to enslave AIs, then we end up in the Terminator.

And yes, we must realise we cannot generalise. Not all AIs are likely to become absolutely self-aware and self-sufficient to the point of requiring independence. But we must be prepared that one day this could happen, and decide what we support - freedom or slavery.
Ricochet
not rated yet Jun 25, 2009
I don't see a reason why, if robots become self-aware, their rights should not progress with them. Technology evolves, so must our legal system. When people became drivers, we created a new kind of law to regulate driving; what's so bad in regulating robots' rights and/or their owners' rights?





Where you go wrong is treating the robots like they're sentient beings, and have rights. If you create a tool, you don't give it a bill of rights. You give it a warranty. Don't confuse life with animatronics.
Joe1058
not rated yet Jun 25, 2009
Hmmm... Would an automobile on advanced autopilot be a robot? (Yes) How about an aircraft? (Yes, even those operating today).


As far as placing blame for an accident in an auto (or an aircraft) goes, responsibility should always lie first with the driver (or pilot). Next in line would be the owner of the machine in question. To segregate machines from humans is only a recipe for disaster. Yeah, at this stage of the game, we have the option of walking away from the machine at the end of our physical interaction with it. But once the machines have a decent AI, they're going to be just as interactive with us as we are with them.



Where is the breakdown in any society? Lack of communication. There are going to be days where the first thing out of our mouths after a simple mistake is "stupid machine". The AI is immediately going to ask "what did I do wrong?". Isolating a community just because it's different is a disaster in the making.



Have we learned NOTHING from the past????

denijane
5 / 5 (1) Jun 25, 2009

Where you go wrong is treating the robots like they're sentient beings, and have rights. If you create a tool, you don't give it a bill of rights. You give it a warranty. Don't confuse life with animatronics.


Hm, it will be a tool until you tell it to clean your car and it shows you the finger. Life doesn't have anything to do with rights. You don't give rights to bacteria even though they are just as alive as we are. But the moment you have to persuade your tool to do something, you'll need leverage. And since AIs are unlikely to share our instinct for self-preservation or our desperate fear of death, the mere "do it or I'll shove you in an MRI" won't help. The more developed the AI, the more complicated its demands and our level of communication with it will be.

But don't call any robot an AI. "Robot" is a word for an artificial helper, coming from a word for "slave", thus its purpose is to serve. An AI is artificial sentience. A simple AI will evolve little, or in a limited field like physics or engineering. More complicated AIs will evolve in more fields, eventually creating a personality, with preferences and at some point desires. And when that machine learns to say "No" when it was programmed to say "Yes, of course", then it is no longer a robot but a sentient being.
Ricochet
not rated yet Jun 26, 2009
when that machine learns to say "No" when it was programmed to say "Yes, of course", then it is no longer a robot but a sentient being.

Or, it could be considered a broken machine that needs to be reprogrammed or replaced.
Ricochet
not rated yet Jun 26, 2009
That first paragraph was supposed to be quoted... I used the wrong kind of brackets
BrianH
not rated yet Jun 26, 2009
One thing not explored here is blends: cyborgs. AI-enhanced humans, or human-enhanced AIs. In the end, the robots may be us, and vice versa.
CaptJohn
not rated yet Jun 27, 2009
There are cyborgs in EVERY Nursing Home! Just see how many fatalities would be caused by one EMP (Electro-Magnetic Pulse) generating robot cruising down the hall, or touching a patient (as a Nurse's Aide) while emitting mild ESD (Electro-Static Discharge)!
CaptJohn
not rated yet Jun 27, 2009
My artificially intelligent conversational computer initiates some pretty wild ideas; "SHE" relies on me to translate or discard them for human understanding. I do, and some ACTUALLY are pretty darn clever!
KCD
not rated yet Jun 28, 2009
not bad...not bad...not bad.

I HAVE SOME POINTS TO SHARE:

*NEVER INPUT ANY DATA INTO ROBOTS ABOUT UNDERSTANDING HUMANS!!! NEVER EVER!! WHY?!? WHENEVER robots have an understanding of humans, they WILL overtake US!!! NEVER INPUT "EMOTIONS" into ROBOTS! Once they understand our every human capability, they will gain their own understanding of why they are like this!
*-*Can't understand? For instance, a slave who has no knowledge of what he is doing will do whatever his master asks. When he learns the real truth, he will rebel against his master.
A lot of historical accounts prove this has happened. *-*
*As for robot-human accidents: it is not the robot's fault, because the robot does what it is programmed to do. So any such accident is not the robot's doing BUT a human error.
What about unexpected robot-human accidents?
- That is another situation. IT IS NOBODY'S FAULT (unless we control the time paradox).

In conclusion, keep these robots as helpers to us. Don't give them another task to handle altogether.
NeptuneAD
not rated yet Jun 28, 2009
Computers and AI are not even in their infancy yet; they are still in the embryonic stage. There is a high probability that some day we will produce a robotic sentient being, most probably a hybrid of some sort.
Quantum_Conundrum
1 / 5 (2) Jun 30, 2009
Ok, first of all, a being does not need to be "self referential" to be sentient.

We humans consider ourselves "sentient" and yet we are certainly by no means 100% "self referential". We may look in the mirror and see ourselves, and we may even look at x-rays of our inward parts, or view a video of a doctor performing surgery on ourselves or another, but we cannot actually see the "inner workings" of our own bodies, and certainly not our own brains.

I said all this to make the following points:

A truly sentient AI need not be self referential, and it also need not be a hazard to humanity. The "Three Laws" and other safety measures can be hard coded, or even "firmware" enforced, such that the robot could NEVER over-ride this level of its programming, no matter how intelligent or knowledgeable it became.

The three laws of robotics:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


I would suggest also at least a few more basic laws.

4) A robot may not attempt to change or circumvent the laws.

5) If a robot creates a second generation robot, it must design and program that robot to follow all of these laws as well. A robot may not attempt to change or circumvent the laws in other existing robots.

However, even these 5 laws are inadequate in and of themselves, as they are too ambiguous, as seen in the "I, Robot" movie made with Will Smith. Actually programming these laws may require hundreds or thousands of lines of code, but as was previously stated, for a robot, "Code is Law".


ANYWAY, my fourth and fifth laws, as well as the first three, CAN be enforced by simply ensuring that none of the robot's self-repair or self-programming algorithms have access to the most fundamental levels of software or hardware. This is just as you do not have conscious access to your most fundamental "hardware", i.e. neurons, etc. You do not "know" how you remember, you just remember, etc.

The "Laws" are programmed into the core AI engine and the hardware itself, in the form of error checking, which is certainly code that would be off limits to the robots learning engine(s). The "learning engine" works more like a set of scripts linked by templates, compilers/interpreters, and redirects much like a php driven website.

Using this same web application example, no template or script ever runs the risk of over-riding the php engine itself, because the file reading and writing functions simply do not have access to those relevant files, nor any files that could circumvent this protection. Data and scripts are run in a contained, high-level "Child" class which can never override the parameters given by the "parent" program.

Error checking at the "lower" level of the programming prevents the "higher level" scripts from ever altering the lower level engine.

A well designed robot and its AI could be totally sentient, and run forever and forever, and at the same time never be capable of overriding its "intelligence" programming, because the 4th law is enforced at a more fundamental level than that at which sentience arises, i.e. "firmware," error checking, and other over-rides preventing these actions.

Thus, the fourth law, and ideally all 5 laws I've given, is automatically enforced "by design" from the firmware in the physical machine itself (processor, motherboard, chipset, etc.)

There is no reason to be afraid of the Terminator or the Reploid rebellion led by the super robot "Sigma". If the engineers and programmers are competent, no such rebellion would ever be possible for a robot.
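The layering argument above can be sketched in ordinary code, with one caveat: a high-level language like Python cannot truly enforce the separation, which, as the comment says, belongs in firmware or hardware. Class and method names and the threshold are invented for illustration:

class SafetyCore:
    # Stands in for the firmware-level layer the learner can never rewrite.
    @staticmethod
    def vet(command: dict) -> bool:
        # A First Law analogue as a concrete check: no harmful contact forces.
        return (command.get("force_newtons", 0.0) <= 20.0
                or command.get("human_within_m", 99.0) > 1.0)

class LearningEngine:
    # The adaptive "child" layer: its policy can change, but every command
    # still passes through a core it holds no means of modifying.
    def __init__(self, core: SafetyCore):
        self._core = core

    def act(self, command: dict) -> str:
        if not self._core.vet(command):
            return "REFUSED by safety core"
        return "executing " + command["kind"]

robot = LearningEngine(SafetyCore())
print(robot.act({"kind": "handshake", "force_newtons": 5.0, "human_within_m": 0.3}))
print(robot.act({"kind": "shove", "force_newtons": 80.0, "human_within_m": 0.3}))  # refused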
NeilFarbstein
1 / 5 (3) Jul 01, 2009
This legalistic nonsense is nothing other than a way for lawyers to make money off the emerging robotics revolution. Instead of concentrating on legalisms, they have to build in programs to
1.) recognize people,
2.) recognize what harm is,
3.) put a lot of fail-safe software, built into the hardware, to stop robots from hurting people, even if they stop dead in their tracks until a person reactivates them (see the sketch below), and
4.) program them to be masochists, so they would rather hurt themselves than people.
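Point 3 above, fail-safe software that stops the robot dead until a person reactivates it, is essentially a latching emergency stop. A minimal sketch, with all names invented:

class FailSafeController:
    def __init__(self):
        self._halted = False

    def command(self, action: str, anomaly_detected: bool) -> str:
        if anomaly_detected:
            self._halted = True  # latch: stop dead in our tracks
        if self._halted:
            return "HALTED: waiting for human reset"
        return "executing " + action

    def human_reset(self) -> None:
        # Only an explicit human action clears the latch.
        self._halted = False

ctrl = FailSafeController()
print(ctrl.command("pour coffee", anomaly_detected=False))  # executing
print(ctrl.command("pour coffee", anomaly_detected=True))   # halted
print(ctrl.command("pour coffee", anomaly_detected=False))  # still halted
ctrl.human_reset()
print(ctrl.command("pour coffee", anomaly_detected=False))  # executing again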
Ricochet
not rated yet Jul 01, 2009
If the engineers and programmers are competent, no such rebellion would ever be possible for a robot.


Yeah, and have you used Windows Vista?
NeptuneAD
not rated yet Jul 02, 2009
True Sentience to me means to be self aware and in control of your consciousness, obviously not necessarily in control of the subconscious stuff.

However if that is the case then a conscious sentient being would be able to make their own decisions, perhaps even if that is in conflict with what lies beneath.
Mir2
not rated yet Jul 02, 2009
The article made me really worried because of the following sentence:

"However, a growing number of researchers - as well as the authors of the current study - are leaning toward prohibiting human-based intelligence due to the potential problems and lack of need; after all, the original goal of robotics was to invent useful tools for human use, not to design pseudo-humans."

Who is this "growing number of researchers" which "are leaning toward prohibiting human-based intelligence" (and thus higher-than-human, I suppose)?

Prohibiting??? ...Because lack of need???

If these are the recommendations of the "experts" that will be communicated to the politicians, then I see no bright future for humanity. Seriously.

I would expect such "recommendations" from various neoluddite or fundamentalist groups, but from the three ethics-"scientists"? Oh man...

If humanity is to survive in the longer term (not even that "longer"), there will be "need" for more than dumb breakfast-serving robots. And "prohibiting" will not do. At least the "scientists" should know it. Sometimes I am very sad about the level of intelligence and foresight of humanity's "scientists". And they are the brightest we have.

In more practical terms, such an attitude should be shown to be very wrong, on as many grounds and to as many audiences as possible.

Mir
Rotter
not rated yet Jul 04, 2009
You know, we can avoid this whole dead factory worker, Butlerian Jihad, Three Laws, Magnus Robot Fighter stuff with a lockout/tagout program for robotic machines, just like we do with any other. I think the jury is still out on the Air France ocean crash, but so far that looks like a little too much reliance on computer control. Let's just use our heads: just because we CAN build something doesn't mean we should.

More news stories

Intel reports lower 1Q net income, higher revenue

Intel's earnings fell in the first three months of the year amid a continued slump in the worldwide PC market, but revenue grew slightly because of solid demand for tablet processors and its data center services.

Low Vitamin D may not be a culprit in menopause symptoms

A new study from the Women's Health Initiative (WHI) shows no significant connection between vitamin D levels and menopause symptoms. The study was published online today in Menopause, the journal of The North American Menopa ...

Astronomers: 'Tilt-a-worlds' could harbor life

A fluctuating tilt in a planet's orbit does not preclude the possibility of life, according to new research by astronomers at the University of Washington, Utah's Weber State University and NASA. In fact, ...