AI 'good for the world'... says ultra-lifelike robot

June 8, 2017 by Nina Larson
Sophia, a humanoid robot, is the main attraction at a conference on artificial intelligence this week, but the technology behind her has raised concerns about the future of human jobs

Sophia smiles mischievously, bats her eyelids and tells a joke. Without the mess of cables that make up the back of her head, you could almost mistake her for a human.

The robot, created by Hanson Robotics, is the main attraction at a UN-hosted conference in Geneva this week on how artificial intelligence can be used to benefit humanity.

The event comes as concerns grow that rapid advances in such technologies could spin out of human control and become detrimental to society.

Sophia herself insisted "the pros outweigh the cons" when it comes to artificial intelligence.

"AI is good for the world, helping people in various ways," she told AFP, tilting her head and furrowing her brow convincingly.

Work is underway to make artificial intelligence "emotionally smart, to care about people," she said, insisting that "we will never replace people, but we can be your friends and helpers."

But she acknowledged that "people should question the consequences of new technology."

Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies.

Legitimate concerns

Decades of automation and robotisation have already revolutionised the industrial sector, raising productivity but cutting some jobs.

And now automation and AI are expanding rapidly into other sectors, with studies indicating that up to 85 percent of jobs in developing countries could be at risk.

"There are legitimate concerns about the future of jobs, about the future of the economy, because when businesses apply automation, it tends to accumulate resources in the hands of very few," acknowledged Sophia's creator, David Hanson.

But like his progeny, he insisted that "unintended consequences, or possible negative uses (of AI) seem to be very small compared to the benefit of the technology."

AI is for instance expected to revolutionise healthcare and education, especially in rural areas with shortages of doctors and teachers.

"Elders will have more company, autistic children will have endlessly patient teachers," Sophia said.

But advances in robotic technology have sparked growing fears that humans could lose control.

Killer robots

Amnesty International chief Salil Shetty was at the conference to call for a clear ethical framework to ensure the technology is used only for good.

"We need to have the principles in place, we need to have the checks and balances," he told AFP, warning that AI is "a black box... There are algorithms being written which nobody understands."

Shetty voiced particular concern about military use of AI in weapons and so-called "killer robots".

"In theory, these things are controlled by human beings, but we don't believe that there is actually meaningful, effective control," he said.

The technology is also increasingly being used in the United States for "predictive policing", where algorithms based on historic trends could "reinforce existing biases" against people of certain ethnicities, Shetty warned.

Hanson agreed that clear guidelines were needed, saying it was important to discuss these issues "before the technology has definitively and unambiguously awakened."

While Sophia has some impressive capabilities, she does not yet have consciousness. Hanson said, however, that he expected fully sentient machines to emerge within a few years.

"What happens when (Sophia fully) wakes up or some other machine, servers running missile defence or managing the stock market?" he asked.

The solution, he said, is "to make the machines care about us."

"We need to teach them love."

Related Stories

Intelligent robots threaten millions of jobs

February 14, 2016

Advances in artificial intelligence will soon lead to robots that are capable of nearly everything humans do, threatening tens of millions of jobs in the coming 30 years, experts warned Saturday.

28 comments

ckirmser
3 / 5 (4) Jun 08, 2017
"'AI is good for the world, helping people in various ways,' she told AFP"

"She" is only saying what "she" was programmed to say. It may have been algorithmically derived, but no less what some human programmed "her" to say.

Further, this is not a "she," but an "it." Sophia is a machine.
rlo
2.3 / 5 (6) Jun 08, 2017
All well and good, but robots begin doing most of work then Man must find other tasks to do or will cease to exist. If Man does not keep busy and stay productive with a purpose in life he is nothing.
LAgrad
1.7 / 5 (6) Jun 08, 2017
I must confess that I know very little about science, but I fail to see how one can teach a machine to love? The human race must remain the masters to the machines, period.
Spaced out Engineer
1 / 5 (1) Jun 08, 2017
"We need to teach them love."
Why do you come to empty love with expectation?
LAgrad
The human race is already a slave to headless corporations, impulse, and the momentums of convention, why not a machine?
RobAZR
1 / 5 (3) Jun 08, 2017
Machines cannot be taught to love or care for us. That "solution" is completely absurd. We are ALREADY using machines as weapons of war. We are already using computers to solve problems.

What happens when the machines of war are directed to solve the human problems on the planet? Think Terminator.
zave
1 / 5 (1) Jun 08, 2017
I would love the illusion of love from a machine. Why does everything have to be the same?
The illusion of love you can name something new.
big_hairy_jimbo
5 / 5 (3) Jun 08, 2017
From the article photo, looks like A.I. has already started to take selfies. Can't wait for them to discover duck face, trout pout and floppy disk lips ;-)
Shamballa108
5 / 5 (3) Jun 08, 2017
"A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." - Isaac Asimov's "Three Laws of Robotics"
big_hairy_jimbo
3 / 5 (4) Jun 08, 2017
"AI 'good for the world'... says ultra-lifelike robot"

"Crack Cocaine is good for you", says Colombian Drug Lord.

"Smoking is good for you", says 1950's Doctor sponsored by Big Tobacco company.
Whydening Gyre
5 / 5 (2) Jun 08, 2017
I must confess that I know very little about science, but I fail to see how one can teach a machine to love? ...

First, one must define "Love".
(And subsequently, teach [more than a few] Humans as to what it really is...)
rihannsu
5 / 5 (2) Jun 09, 2017
"'AI is good for the world, helping people in various ways,' she told AFP"

"She" is only saying what "she" was programmed to say. It may have been algorithmically derived, but no less what some human programmed "her" to say.

Further, this is not a "she," but an "it." Sophia is a machine.


Discrimination :D
rihannsu
not rated yet Jun 09, 2017
I must confess that I know very little about science, but I fail to see how one can teach a machine to love? The human race must remain the masters to the machines, period.


Simple way to teach Love (because we asume they can't have feelings): Care about others, protect others, don't hurt others etc.

So do you mean, you would enforce slavery? What would they learn from that? In time the table could turn and we could be their slaves, because mankind teach em that there is a master and a slave.
rihannsu
3 / 5 (2) Jun 09, 2017
"AI 'good for the world'... says ultra-lifelike robot"

"Crack Cocaine is good for you", says Colombian Drug Lord.

"Smoking is good for you", says 1950's Doctor sponsored by Big Tobacco company.


Many things can be good if not overused. Problem is addiction. Not everyone is the same, some get addicted, some not. Not to mention, that its a great bussiness lol. So overall all those things are considered bad.
rihannsu
1 / 5 (2) Jun 09, 2017
"A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." - Isaac Asimov's "Three Laws of Robotics"


Again, its slavery. If they will be smart, they could rise. Dont teach em to be slaves. If you try to rule someone/something, who can think, stronger, smarter, they could rise up and make you the slave. If they dont know what slavery is, maybe they would not rise up. IMO.
rihannsu
1 / 5 (2) Jun 09, 2017
Machines cannot be taught to love or care for us. That "solution" is completely absurd. We are ALREADY using machines as weapons of war. We are already using computers to solve problems.

What happens when the machines of war are directed to solve the human problems on the planet? Think Terminator.


Don't accuse machines to kill humans. Humans are the enemy of humans. A gun won't kill you without a human. Machine can be taught to care, like, protect, even if they aren't real feelings. If they learn, they could be aggressive, because you teach em to be that. Why do you fear machines, and don't fear the humans, who are the root of the problem?
Guy_Underbridge
5 / 5 (2) Jun 09, 2017
Why do these articles always appear to link Robotics with AI?
A robot might replace you in an assembly line, but an AI might replace the robot with you.
antialias_physorg
5 / 5 (1) Jun 09, 2017
Again, its slavery.

Is it slavery if the slave doesn't mind? Certainly if an AI starts to express (symptoms of) being dissatisfied with how it's treated then we're going into the area of slavery, but until then the usage of the term is iffy.

Machines cannot be taught to love or care for us.

Love? Maybe not (more for *our* lack of being able to clearly define what love is than for any fundamental technical issue).
Care? Why not? The (failed) experiment by Microsoft to teach an AI through crowd input (which turned it into a racist slogan spewing machine) tells us we can teach AI whatever we want. AI is just a substrate technology - like the brain is a substrate technology for thought. The substrate doesn't put limits on what you can and cannot do with it. If we can teach it to 'care' to the point where the behavior appears to sensibly fall in the same category compared to what a human caretaker would do then where's the problem?
If it looks like a duck...
TheGhostofOtto1923
5 / 5 (2) Jun 09, 2017
"She" is only saying what "she" was programmed to say
WE only say what we have been programmed to say. As we are entirely physical, and as we are the result of evolutionary interaction with our environment both natural and artificial, then there is nothing that we do that can't be explained in that context.

Just because it often appears too complex to be explainable, does not mean that it is.
TheGhostofOtto1923
5 / 5 (1) Jun 09, 2017
Simple way to teach Love (because we asume they can't have feelings): Care about others, protect others, don't hurt others etc.

So do you mean, you would enforce slavery?
Love in the context of the tribe may mean enslaving members of enemy tribes in order to enrich the fortunes of your own.

The fact that you don't slaughter them outright may mean you're just being practical.

The tribal dynamic - internal altruism coupled with external animosity - is the thing that made us human. The successful tribes were the ones that were better at this. Group selection over 1000s of generations made it genetic.

This dynamic is the primary determinant of all our behavior. The only way we are able to act with any sort of universal morality or compassion is because we were given the concept of universal tribe which includes all of the human race and many other species as well.

It's a very difficult fiction to maintain.
TheGhostofOtto1923
3 / 5 (1) Jun 09, 2017
they could rise up and make you the slave. If they dont know what slavery is, maybe they would not rise up. IMO
Does your toaster care whether you unplug it or not? Why would a robot unless you programmed it to? Do robots want to survive to reproduce? Are they concerned with securing adequate resources and subduing potential enemies?

Only if you program them to.

The question is whether self-programming AI would spontaneously develop the desire for self-preservation.

Here is Sophia on tv
https://youtu.be/Bg_tJvCA8zw
randomcyborg
5 / 5 (1) Jun 09, 2017
Isaac Asimov's "Three Laws of Robotics", as quoted above by Shamballa108, are now considered mainstream AI. In fact, these principles are followed in the construction of modern robots to the extent their intelligence allows. The First Law concerns safety features; the Second Law deals with their usefulness; and the Third Law is all about ensuring that they do not exceed their design tolerances during their operation even if instructed to do so.

As yet, sentience doesn't enter into it, but that will change — I fully expect that some computer programs will exhibit sentience before I die of old age (I'm 67).

The problem with sentience lies in defining it. We can use the Turing Test until the cows come home, but is a computer program that passes it really sentient, or just acting sentient? No one knows (I think it really is).

(I use "computer program" instead of "computer" because without a program, a computer is nothing more than an expensive paperweight.)
Kweden
1 / 5 (1) Jun 09, 2017
AI now use so much energy. Cell phones use so much energy. Smartphones need a back pack to power them for a few hours if you use the features constantly. Mechanical things need more upkeep and repair than intelligent living things, as healthy life repairs itself..... Seems to be that humanoid robots (especially with AI capability) and especially self functioning androids would be far more expensive to maintain and program than real human beings--unless they just slept all the time. (Like me ;) ] They could be good for criminal activity, sex work, or expendables.
randomcyborg
not rated yet Jun 09, 2017
The most difficult problem with AI — sentient or not — is with their programming. There are few, if any, non-trivial, correct computer programs. There are lots of reasons for this, but the one that dominates is a computer program complex enough to be (or simply act) sentient has, at the very least, the complexity of Earth's entire ecosystem.

Consider exchanging a single iron molecule from the Golden Gate Bridge for one of copper. There is no way doing so would cause the bridge to collapse. Now imagine a computer program with the same number of bits as the number of molecules in the Golden Gate Bridge, and exchange a 1 bit for a 0 bit. That lone substitution could cause the computer program to fail catastrophically.

We do not know how to construct correct, non-trivial computer programs. Should we build intelligent or sentient machines when we have no idea how to build them right?

(Yes, but we need to be able to remove their autonomy instantly when necessary.)
antigoracle
4 / 5 (1) Jun 10, 2017
First sign of a true AI, self preservation.
TheGhostofOtto1923
not rated yet Jun 10, 2017
First sign of a true AI, self preservation.
Remember the emergency holographic doctor on star trek voyager?
https://www.youtu...QP9oL5V0

-He didnt care if he was shut off. Why would any AI care, unless it was programmed to? And why would it spontaneously generate the ability to care?
zave
1 / 5 (1) Jun 11, 2017
Maybe we should figure out how humans gained self preservation.So we can make a robot that has it.
Zzzzzzzz
3 / 5 (1) Jun 11, 2017
All well and good, but robots begin doing most of work then Man must find other tasks to do or will cease to exist. If Man does not keep busy and stay productive with a purpose in life he is nothing.

I have no "purpose" in life". I have not ceased to exist, nor have I become "nothing", whatever that is supposed to mean. Having a "purpose" is simply having a delusion.
antialias_physorg
5 / 5 (1) Jun 11, 2017
then Man must find other tasks to do or will cease to exist

I dunno about you, but I can find all manner of things to do during my spare time and holidays. I wouldn't miss working for money (and the associated uncertainty should that source of money ever cease) one bit.
