What happens when the robots sound too much like humans?

May 9, 2018 by Matt O'Brien
Google CEO Sundar Pichai speaks at the Google I/O conference in Mountain View, Calif., Tuesday, May 8, 2018. Google put the spotlight on its artificial intelligence smarts at its annual developers conference Tuesday, where it announced new features and services imbued with machine learning. (AP Photo/Jeff Chiu)

Artificial intelligence has a new challenge: Whether and how to alert people who may not know they're talking to a robot.

On Tuesday, Google showed off a computer assistant that makes convincingly human-sounding phone calls, at least in its prerecorded demonstration. But the real people in those calls didn't seem to be aware they were talking to a machine. That could present thorny issues for the future use of AI.

Among them: Is it fair—or even legal—to trick people into talking to an AI system that effectively records all of its conversations? And while Google's demonstration highlighted the benign uses of conversational robots, what happens when spammers and scammers get hold of them?

Google CEO Sundar Pichai elicited cheers on Tuesday as he demonstrated the new technology, called Duplex, during the company's annual conference for software developers. The assistant added pauses, "ums" and "mmm-hmms" to its speech in order to sound more human as it spoke with real employees at a hair salon and a restaurant.

"That's very impressive, but it can clearly lead to more sinister uses of this type of technology," said Matthew Fenech, who researches the policy implications of AI for the London-based organization Future Advocacy. "The ability to pick up on nuance, the human uses of additional small phrases—these sorts of cues are very human, and clearly the person on the other end didn't know."

Fenech said it's not hard to imagine nefarious uses of similar chatbots, such as spamming businesses, scamming seniors or making malicious calls using the voices of political or personal enemies.

"You can have potentially very destabilizing situations where people are reported as saying something they never said," he said.

Pichai and other Google executives tried to emphasize that the technology is still experimental, and will be rolled out cautiously. It's not yet available on consumer devices.

"It's important to us that users and businesses have a good experience with this service, and transparency is a key part of that," Google engineers Yaniv Leviathan and Yossi Matias, who helped design the new technology, wrote in a Tuesday blog post . "We want to be clear about the intent of the call so businesses understand the context. We'll be experimenting with the right approach over the coming months."

It's unclear how the company will navigate existing telecommunications laws, which can vary by state or country. Google didn't immediately return a request for comment Wednesday on how it plans to seek the consent of people called by its bots.

One co-owner of a San Francisco Bay Area barbershop patronized by some Google employees was a little creeped out by the privacy implications.

"It seems like something that would be helpful for our clients," said Katherine Esperanza, co-owner of the Slick & Dagger barbershop in Oakland, California. Esperanza, however, wondered if the shop would be able to block the calls, and said it "begs the question about whether the conversation is recorded and if the recipient of these automated calls could be aware that they're being recorded."

Anti-wiretapping laws in California and several other states already make it illegal to record phone calls without the consent of both the caller and the person being called. The Federal Communications Commission has also been grappling with rules for robocalls, the unsolicited and automatically dialed calls made by telemarketers.

Such calls are typically prerecorded monologues, but more businesses and organizations are employing machine-learning techniques to respond to a person's questions with a natural-sounding conversation, in hopes they'll be less likely to hang up.

48 comments

TheGhostofOtto1923
not rated yet May 09, 2018
How will we know? Efficiency, clarity, dependability, unwavering attention... utter excellence. The absence of automated menus on help lines, where you get a live rep right off the bat who happens to be AI.
https://www.youtu...YuvU-zYI

That's how we will know the difference.
to trick people into talking to an AI system that effectively records all of its conversations?

https://www.youtu...wNRGAQhc

-I wonder when was the last time this author called his credit card company?
rrwillsj
5 / 5 (1) May 09, 2018
When you are looking at a graph of human intelligence... you just gotta wonder if the folks on the down-sloped curve towards the low end, to the alt-right of the average, could pass a Turing Test?
Eikka
not rated yet May 10, 2018
How will we know? Efficiency, clarity, dependability, unwavering attention... utter excellence


More like unwavering and obtuse adherence to rules that aren't always well thought-out, weird edge case glitches because the system wasn't trained on enough variables, and bugs or other "roughness" resulting from budgeting and engineering the systems to be "good enough".

-I wonder when was the last time this author called his credit card company?


The difference is that while everyone might be recording their own calls independently, those recordings are internal to the companies and not accessible elsewhere, whereas Google's voice automation services to other companies have the "unintended" consequence of gathering a huge central database of telephone transactions from all over society. In essence, you can't even call anywhere without getting tracked and profiled by Google.

That kind of surveillance machinery would have been Stasi's wet dream.
TheGhostofOtto1923
not rated yet May 10, 2018
More like unwavering and obtuse adherence to rules that aren't always well thought-out
Did you ever get a helpline bimbo who can't think at all? Her rules are the SAME as AI rules but AI is actually capable of remembering them.
weird edge case glitches because the system wasn't trained on enough variables
?? AI has unlimited potential, unlike helpline bimbos in Argentina.
and bugs or other "roughness" resulting from budgeting and engineering the systems to be "good enough"
If these exist they are only delays. Soon even cheap systems will be superior. You will have a personal assistant that will gradually learn to do your job better than you, plus receive constant upgrades and lessons learned from its community.

And once incorporated they will never be forgotten or disregarded, unlike what you have already dropped from your last irrelevant CE course.
TheGhostofOtto1923
not rated yet May 10, 2018
Plus no lunchbreaks, no vacations, no sick leave, no workplace drama, no entitlements. Only continuous improvement.

Obsolete software - no disposal costs. Environmentally friendly.
TheGhostofOtto1923
not rated yet May 10, 2018
unintended" consequence of gathering a huge central database of telephone transactions
Well if you're concerned perhaps you might want to consider purchasing a true AI PDA that monitors your voice interactions for risky content and corrects them in realtime per your own preset criteria. Like a voice spellcheck with millisecond delay. Radio talkshows already have primitive human versions of this.

After a while you could let it interact entirely on its own, taking care of mundane communications like calling Argentine helpline bimbos. In Spanish no less.

IOW learning to be a far better version of you than you.
Eikka
not rated yet May 11, 2018
Did you ever get a helpline bimbo who cant think at all? Her rules are the SAME as AI rules but AI is actually capable of remembering them.


When you go through an automated helpline and press zero to talk to a person, what you're really getting is the lowest paid intern who doesn't have any idea, yet their task is to figure out who you really should be calling instead.

If you go through the phone menu system, assuming the designer was half-competent, or check the company website for the actual personnel instead of calling the generic helpline, you're infinitely more likely to get proper service.

What the AI does here is just replace the intern with a machine that still hasn't got a clue and is just going through a script to get you to the person you should be calling in the first place.
Eikka
not rated yet May 11, 2018
?? AI has unlimited potential


You're really turning AI into a god here. You're assuming it is intelligent because it appears to be so on the surface. When the AI says "Mm-hmm, gotcha", what did it really "get"? For example, in the Google conversation example, they never show what the result of that call was - did the client get their reservation?

The AI can understand the situation in multiple different ways:
1) the reservation was made at the proposed time
2) the reservation was not made
3) the reservation is not necessary
4) the reservation at the time is not possible
5) the reservation was made at a different time
6) etc...

So what did it report back to the client?

Just because it says "gotcha" according to the script doesn't mean it has actually made the right judgement. AI is "brittle" because it lacks real understanding - it's giving you plausible replies to the conversation, but what's really going on in the background is lights on but nobody home.
TheGhostofOtto1923
not rated yet May 11, 2018
What the AI does here is just replace the intern with a machine that still hasn't got a clue and is just going through a script to get you to the person you should be calling in the first place
Did you listen to that Google demo?

You do seem to have a general disdain for the potential of AI to replace humans. AI can have every clue that a human has. Humans at any level are going through scripts, regurgitating what they know. The difference between us and AI is that we ad lib, we forget, we lie, we often just don't care.

I went to a store a few weeks ago to buy a car part to install myself. The so-called expert kept giving me bad info, telling me 'yeah that's what we always use' and 'yeah that's how we always do it'. Each time I would go online or find an actual installer in the back or read the info on the package and prove him wrong.

AI can have access to all that info, all the time. It won't be just a simple menu system.
Eikka
not rated yet May 11, 2018
I mean, even people do that. Dumb people sometimes become experts at feigning understanding. For example in school when the teacher is trying to help a kid through, let's say a maths problem, the dumb kid can't do the problems so instead they start learning to say "Oh! Aha!" in the right ways at the right moments to trick the teacher into solving the problem for them. The teacher thinks they're getting through, but the kid doesn't learn it - they just repeat the "Oh, aha!" to reward the teacher for the right answers, and then they repeat the answers in the exam by memorizing them.

That puts the kid through the task by making it look like they're completing the exercises, to avoid the punishment like having to take a remedial class, without actual understanding of the topics.

Dumb people can be clever like that - and that's what the AI is doing - feigning understanding while actually just socially engineering your own teachers to -think- that you understand it.
TheGhostofOtto1923
not rated yet May 11, 2018
AI is "brittle" because it lacks real understanding
So what is it that you mean by 'understanding' and what makes you think humans have some faculty that enables them to 'understand' things that a machine never will?

Our 'understanding' is only the sum total of all we've learned. But it's tainted by faulty memories and wavering commitment.
it's giving you plausible replies to the conversation, but what's really going on in the background is lights on but nobody home
Yeah humans will often rely on such mindless catchphrases. Is this what you mean by 'understanding'?
Eikka
not rated yet May 11, 2018
Did you listen to that Google demo?


Yes. See the criticism above.

You do seem to have a general disdain for the potential of AI to replace humans. AI can have every clue that a human has.


It can have the clue, but it won't get the point.

See the Chinese Room argument - a script will never be intelligent. Even deep learning "neural" networks reduce to nothing but scripts once they stop training them and "freeze" the network to keep it from forgetting.

That hasn't got anything to do with replacing humans. Of course you can. It's just not going to be very pleasant or effective in the end. It's not going to be singularity "nerd valhalla", but just more of the same old dodging around the limitations of obtuse machines.
TheGhostofOtto1923
not rated yet May 11, 2018
a script will never be intelligent
So what makes you think we don't just follow scripts? Machines can potentially have access to all available scripts.
It's just not going to be very pleasant or effective in the end
What makes you think that being pleasant isn't just a set of scripts?
Eikka
not rated yet May 11, 2018
Our 'understanding' is only the sum total of all we've learned.


Our understanding is more a dynamic combination of what we've learned, and how we integrate new information and make inferences - not just what we know. Understanding is an active process, not simply a script reference of "If A then B".

For example, if the Google Duplex hasn't been trained with the possibility that you don't necessarily have to reserve a table at a restaurant, it will not get through the call with the correct conclusions made. A person can assimilate this new information and form an understanding on the spot, whereas the scripted robot can't.

Yeah humans will often rely on such mindless catchphrases. Is this what you mean by 'understanding'?


Yes, people can act unintelligently. That's no excuse for the robot.
TheGhostofOtto1923
not rated yet May 11, 2018
but just more of the same old dodging around the limitations of obtuse machines
-which can be systematically improved and upgraded, unlike obtuse humans who will never change.
Eikka
not rated yet May 11, 2018
So what makes you think we don't just follow scripts?


Again, see the Chinese Room argument - a script doesn't have the causal power to constitute intelligence or understanding.

Machines can potentially have access to all available scripts.


And? Just adding more code doesn't make a plain script any more intelligent.
TheGhostofOtto1923
not rated yet May 11, 2018
Our understanding is more a dynamic combination of what we've learned, and how we integrate new information and make inferences - not just what we know. Understanding is an active process
'dynamic combination', 'integrate new information', 'make inferences' - you do realize these are scripts don't you? From your repertoire of standard preconceived responses?

A machine will have a much broader library to access than yours or mine. Perhaps this is what really disturbs you.
TheGhostofOtto1923
not rated yet May 11, 2018
Just adding more code doesn't make a plain script any more intelligent
'Intelligent' - what the hell does that mean? Just another script.

HUMANS ARE MACHINES. Get over it.

And they don't work very well. Many mistake this for creativity and think it's somehow an advantage or a virtue.

It isn't.
Eikka
not rated yet May 11, 2018
but just more of the same old dodging around the limitations of obtuse machines
-which can be systematically improved and upgraded, unlike obtuse humans who will never change.


You get diminishing returns.

The scripted "AI" is basically a whole bunch of "IF A THEN B" statements, and improving it by adding more conditional statements is a Sisyphean task because the finer you go into the details, the more optional branches you get.

Google could plausibly get close with petabytes of data, but this approach to "intelligence" is like running a particle physics simulation of a living brain - you're going to need many many orders of magnitude more processing power (and physical energy/power) to approach the accuracy and function of the real deal, and this difficulty ultimately makes your "endless improvement" practically infeasible.

At some point the engineers are just going to call it "good enough", and stop improving it because it takes so much effort.
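The "IF A THEN B" branching problem described above can be sketched with a toy rule table. This is a hypothetical illustration of scripted dialogue, not Google Duplex's actual architecture; the phrases and replies are made up:

```python
# A toy "IF A THEN B" dialogue script (hypothetical rules for illustration).
# Every distinct phrasing a caller might use needs its own entry, which is
# why coverage grows combinatorially as conversations get more detailed.
RULES = {
    "can i book a table": "For how many people?",
    "for two": "What time?",
    "at seven": "Booked for two at seven. Mm-hmm.",
}

def scripted_reply(utterance: str) -> str:
    """Return the canned response for a known utterance, else give up."""
    return RULES.get(utterance.strip().lower(), "Sorry, I didn't catch that.")
```

Saying "at 7 pm", "seven o'clock" or "nineteen hundred" all miss the "at seven" branch, so each would need its own scripted entry - the Sisyphean task the comment is pointing at.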
Eikka
not rated yet May 11, 2018
'dynamic combination', 'integrate new information', 'make inferences' - you do realize these are scripts don't you?


No. That's the point of the Chinese Room argument again. Scripts don't have the causal power to do that. It leads to logical infinite regress - you program the machine to learn a new thing, but then you have to tell the machine how to learn, or how to learn how to learn how to learn... it will never do it on its own - it just can't be scripted in.

'Intelligent' - what the hell does that mean? Just another script.


Intelligence appears to be the emergent property of the configuration of the human brain as it operates and evolves by reciprocal interaction with its environment. It is not a set-in-stone script like a computer program.

One could say the evolution of the neural network IS intelligence as it is continuously happening, unlike the frozen "deep learning" AI, and you can't have "scripted evolution".
TheGhostofOtto1923
not rated yet May 11, 2018
Personality is the sum total of our faults, not our strengths. It's the only way we can tell each other apart.

Do machines need personalities? Do we need to program them with faults so we don't feel as intimidated by them?

No.

We just need to get used to the realization that they can and will be much better at everything than we are. Why? Because we design them to be. That's why they're there.

It's our learning process, not theirs.
TheGhostofOtto1923
not rated yet May 11, 2018
you program the machine to learn a new thing, but then you have to tell the machine how to learn, or how to learn how to learn how to learn...
Uh huh. So how do humans learn? Intuition? Healthy nurturing environment? Gold star on your forehead?

Spirituality? Awe and wonderment? 'Thirst' for knowledge? 'Hunger' to learn? haha
Eikka
not rated yet May 11, 2018
Uh huh. So how do humans learn? Intuition? Healthy nurturing environment? Gold star on your forehead?

Spirituality? Awe and wonderment? 'Thirst' for knowledge? 'Hunger' to learn? haha


When you put a square peg through a round hole, at first it won't go through, but eventually the peg becomes a little more round, and the hole becomes a little more square by the action of slamming the two together. How did the square peg learn to become rounded?

Necessity is the mother of invention, or intelligence in this case. I repeat myself:

One could say the evolution of the neural network IS intelligence as it is continuously happening, unlike the frozen "deep learning" AI, and you can't have "scripted evolution".

TheGhostofOtto1923
not rated yet May 11, 2018
Intelligence appears to be the emergent property of the configuration of the human brain as it operates and evolves by reciprocal interaction with its environment. It is not a set-in-stone script like a computer program
RUBBISH.

So, eikka is a priest now, regurgitating script from holy philo books.

Silly human.
configuration of the human brain
The human unwittingly betrays his true motivations. No, non-human animals can learn, create. Our uniqueness is only an illusion of complexity.

And machines can be far more complex than us. Illusions far more convincing.
TheGhostofOtto1923
not rated yet May 11, 2018
When you put a square peg through a round hole, at first it won't go through, but eventually the peg becomes a little more round, and the hole becomes a little more square by the action of slamming the two together. How did the square peg learn to become rounded?
Script.
Necessity is the mother of invention, or intelligence in this case
Script.
I repeat myself
Script. Try something original. [script]
Eikka
not rated yet May 11, 2018
So, eikka is a priest now, regurgitating script from holy philo books.


If that's so, then why are AI researchers so keen on training their deep learning networks on exactly the same principles?

The difference is, when the AI has been trained and tested, it is frozen in place as a fixed script, thus making it non-intelligent. The intelligence took place while the AI was in training; when it's operational it is just regurgitating the answers it has found.
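The "frozen" point can be made concrete: once training stops, a network's weights are just constants, so the deployed model is a fixed deterministic function of its input. The weights below are invented for illustration:

```python
# A tiny "frozen" network: weights learned earlier, now constants.
# Same input always yields the same output; no further learning occurs.
WEIGHTS = [0.4, -1.2, 0.7]  # hypothetical trained values, frozen at deploy time
BIAS = 0.1

def frozen_net(x):
    """One fixed linear threshold unit: a deterministic input-to-output mapping."""
    s = BIAS + sum(w * xi for w, xi in zip(WEIGHTS, x))
    return 1.0 if s > 0 else 0.0
```

Calling `frozen_net` twice with the same input always gives the same answer, which is the sense in which the comment calls a deployed network "just a script".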
Eikka
not rated yet May 11, 2018
Script. Try something original.


Nope. Reality isn't a script. What is happening to your brain and mine isn't according to some determined code. The last 100 years of fundamental physics has proven that much.

You're stuck back in 1800's with some sort of clockwork universe philosophy, which is ironic seeing how you accuse everyone else of "regurgitating script from holy philo books".
TheGhostofOtto1923
not rated yet May 11, 2018
Script. Try something original.


Nope. Reality isn't a script. What is happening to your brain and mine isn't according to some determined code
script
The last 100 years of fundamental physics has proven that much
script
You're stuck back in 1800's with some sort of clockwork universe philosophy
whoa, big script
which is ironic seeing how you accuse everyone else of "regurgitating script from holy philo books"
ad hom script

Keep trying. [Script]

The human who realizes it's a machine
https://youtu.be/6odqRaRV6PM
TheGhostofOtto1923
not rated yet May 11, 2018
Nope. Reality isn't a script. What is happening to your brain and mine isn't according to some determined code
Must be magic then. [Script]

Quantum fluctuations in cranial microtubules. [Penrose script]

BTW that star trek clip was from an episode that was ALL script.
Eikka
not rated yet May 11, 2018
script
script
whoa, big script
ad hom script
Keep trying. [Script]


Must be magic then. [Script]
Quantum fluctuations in cranial microtubules. [Penrose script]


Now you're no longer making any sense or presenting any coherent counter-argument. You're just babbling.
TheGhostofOtto1923
not rated yet May 11, 2018
Yeah I am. Your responses are all preconceived, scripted, predictable. So are mine. They're just more clever which is what prompted your scripted ad hom attack.

I can admit it why can't you?

"I would rather be good than original." I M Pei
Eikka
not rated yet May 11, 2018
Your responses are all preconceived, scripted, predictable. So are mine.

I can admit it why can't you?


That's an admission of faith that the universe is a clockwork mechanism, and accepting this premise, words such as "intelligence" don't really have any meaning. Nothing is intelligent in a deterministic universe, since nobody has a choice anyhow, so your question itself is pointless.

Your premise however has been disproven by fundamental physics over the last 100 years. Indeterminism isn't "magic".

As for intelligence, as stated above, it obviously cannot exist in a determined system. It also obviously cannot exist in a totally random system. It can exist in a mixture of the two, but not all such mixtures are necessarily capable of intelligence.
TheGhostofOtto1923
not rated yet May 11, 2018
Your premise however has been disproven by fundamental physics over the last 100 years. Indeterminism isn't "magic"
I see. So intelligence necessarily includes a random, spontaneous component, i.e. irrelevant static. How does irrelevant static make us better problem solvers than machines exactly?

Re my critique of your scripted dialogue above, please indicate one that was original.
TheGhostofOtto1923
not rated yet May 11, 2018
If that's so, then why are AI researchers so keen on training their deep learning networks on exactly the same principles?
Their ranks are full of philos and soft scientists who for instance keep insisting that consciousness is something real instead of a modern stand-in for the soul.

Hence the slow progress.
As for intelligence, as stated above, it obviously cannot exist in a determined system
?? Why not? It is easy to make statements like that about indefinable things like 'intelligence' isn't it?
TheGhostofOtto1923
not rated yet May 11, 2018
Let's check an authority...

"[Intelligence] can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context."

-'Perceive or infer' [not adequately defined]

"Intelligence is most widely studied in humans but has also been observed in both non-human animals and in plants."

- So perhaps they have quantum microtubules on the brain as well?
Eikka
not rated yet May 11, 2018
How does irrelevant static make us better problem solvers than machines exactly?


Non-deterministic computing can be much more energy-efficient and faster, and it can solve, or at least it won't hang up on some classes of problems where deterministic algorithms fail.

Their ranks are full of philos and soft scientists who for instance keep insisting that consciousness is something real


On the contrary. AI research is full of "empirical behaviourists" who make the argument that how the AI works internally is irrelevant as long as it completes the task. This keeps them banging their heads on the wall as they keep finding solutions which almost work.

Like Tesla said of Edison - a little bit of math (theory) would have saved him all the hard work.

?? Why not?


Because in a fully determined system there aren't even any problems to solve, because everything is as it must be, so the very context in which "intelligence" could have a meaning doesn't exist.
Eikka
not rated yet May 11, 2018
You see the problem of AI when you consider that Google's Duplex runs on a distributed system of servers with rows and rows of cabinets full of processors that collectively draw perhaps Megawatts of power.

And it barely does what a human brain achieves with the power of a small lightbulb. Heck, you can have a conversation with a parrot which has a brain the size of a peanut, and it will actually understand some of what you're saying.

Meanwhile the Google computer is still just a fancy chat bot that is trained to reply with a plausible answer based on the probability of what a human would say. It's still Eliza with a bigger database, and a more powerful search engine.

Re my critique of your scripted dialogue above, please indicate one that was original.


I don't understand the criticism. Why does originality matter in this case?

Obviously, not all of your thoughts or ideas are going to be your own. That would be terribly inefficient and improbable.
Eikka
not rated yet May 11, 2018
Speaking of which:

https://en.wikipe...A_effect

The ELIZA effect, in computer science, is the tendency to unconsciously assume computer behaviors are analogous to human behaviors, that is anthropomorphisation.


the ELIZA effect describes any situation[2][3] where, based solely on a system's output, users perceive computer systems as having "intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve"[4] or "assume that [outputs] reflect a greater causality than they actually do".[5] In both its specific and general forms, the ELIZA effect is notable for occurring even when users of the system are aware of the determinate nature of output produced by the system.


As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
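ELIZA itself was little more than pattern matching and substitution. A minimal sketch of the technique follows; the patterns here are invented for illustration, not taken from Weizenbaum's original DOCTOR script:

```python
import re

# Minimal ELIZA-style reflection: match a pattern, echo a fragment back.
# The rules below are hypothetical examples of the technique.
PATTERNS = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
]

def eliza_reply(text: str) -> str:
    """Return a templated reply for the first matching pattern, else a stock line."""
    for pattern, template in PATTERNS:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."
```

Even this much can feel eerily attentive in conversation, which is exactly the delusional pull Weizenbaum described.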
Eikka
not rated yet May 11, 2018
How does irrelevant static make us better problem solvers than machines exactly?


Further on this point, let's take a neural network.

The network represents knowledge, and can operate in a deterministic fashion. No "quantum tubules" needed. For an input, it produces a well defined output.

But knowledge isn't intelligence. Intelligence happens when the network changes dynamically to respond to demands - it is being continuously shaped. The evolution of the network is a random search through the solution space. Here indeterminacy works to speed up the search and converge the network faster, especially when there's not necessarily one right solution to a question but many.

It's impossible to script evolution - it won't be evolution. You got to throw dice at some point. Like with biological evolution, if there's no mutation no new traits can emerge, or the emergence of new traits must be put down to "God" - which in the case of AI would be the programmer.
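The dice-throwing search described above can be sketched as a random-mutation hill climb over a small weight vector. This is an illustrative toy under the comment's own framing (mutation plus selection), not a claim about how brains or Duplex actually learn:

```python
import random

def evolve(target, steps=2000, seed=0):
    """Randomly mutate a weight vector, keeping only mutations that reduce error."""
    rng = random.Random(seed)
    weights = [0.0] * len(target)

    def loss(w):
        return sum((a - b) ** 2 for a, b in zip(w, target))

    for _ in range(steps):
        # Throw the dice: a small Gaussian nudge to every weight.
        candidate = [w + rng.gauss(0, 0.1) for w in weights]
        if loss(candidate) < loss(weights):  # selection: keep what works
            weights = candidate
    return weights
```

Without the random mutation there is nothing for selection to act on - which is the sense in which evolution can't be scripted.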
Eikka
not rated yet May 11, 2018
I think it was the developer of BEAM robotics, Mark W. Tilden, who commented on the traditional approach to AI as trying to create something from nothing - or more precisely creating chaos out of order, whereas nature goes the other way and creates order out of chaos.

And it makes sense, because if you're trying to program intelligence, you're really starting from nothing. It's much easier to take the buzz and noise around, and filter it down to something sensible, rather than trying to come up with the same information ex nihilo.

So the BEAM bots achieve a lot with very little, since they usually employ something like a chaotic oscillator that is subtly influenced by all the electrical noise it picks up, since there's no signal conditioning, and the robots walk with working gaits and hunt sunny spots on the floor using something like two transistors wired as a flip-flop, for what would traditionally require a micro-controller with hundreds of thousands of transistors.
snoosebaum
not rated yet May 11, 2018
this technology will make only face-to-face contacts trustworthy. Unless you believe in the AI God, LOL
TheGhostofOtto1923
not rated yet May 11, 2018
You see the problem of AI when you consider that Google's Duplex runs on a distributed system of servers with rows and rows of cabinets full of processors that collectively draw perhaps Megawatts of power... and it barely does what a human brain achieves with the power of a small lightbulb. Heck, you can have a conversation with a parrot which has a brain the size of a peanut, and it will actually understand some of what you're saying
My smartphone consumes under 5 watts but can outperform early mainframes.

You lack patience and foresight, 2 qualities your AI replacement will excel at.
TheGhostofOtto1923
not rated yet May 11, 2018
Your brain is just a machine. We will soon know exactly how it does what it does. And more importantly we will be able to design and manufacture more durable and more capable replacements that require less space and energy.

Because meatbrains are an obsolete design incapable of doing the things we require them to do. Which is the reason why we are so desperately trying to replace them.

5 years? 50 years? 500 years? 5000 years? Indistinguishable in the scale of a universe. The Singularity will be like a detonation, an implosion.
Eikka
not rated yet May 11, 2018
> My smartphone consumes under 5 watts but can outperform early mainframes.

My brain consumes under 25 watts, and can outperform a million smartphones.

> Your brain is just a machine.

Even if it were, not all machine architectures are the same. What you're saying is like picking up a pebble and saying "This is just a rock now, but just you wait..."

> The Singularity will be like a detonation, an implosion.

The singularity is a religion.

As I've already pointed out, the AI of today is limited: even as it tries to emulate the brain with deep-learning neural networks, it is not intelligent, because those networks are then frozen into static algorithms. They're "knowledge machines" trained to emulate humans by parroting our behaviour, not our understanding.

To gain our understanding, they would need a continuously learning architecture, which inherits the same problems we have, such as forgetting.
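That fragility is easy to demonstrate. Below is a minimal sketch of catastrophic forgetting (a bare perceptron rather than a deep network, and the data points are invented for illustration): train on task A, then on a conflicting task B, and the weights that encoded A are simply overwritten.

```python
def train(w, data, epochs=20, lr=0.1):
    # Plain perceptron updates, with no protection for old knowledge.
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if pred != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

def accuracy(w, data):
    correct = sum(
        1 for x, y in data
        if (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1) == y
    )
    return correct / len(data)

# Task A: label is the sign of the first coordinate.
task_a = [((1.0, 0.2), 1), ((0.8, -0.4), 1),
          ((-1.0, 0.3), -1), ((-0.6, -0.5), -1)]
# Task B: the same inputs with flipped labels -- directly conflicting.
task_b = [(x, -y) for x, y in task_a]

w = train([0.0, 0.0], task_a)
acc_before = accuracy(w, task_a)   # perfect on task A
w = train(w, task_b)
acc_after = accuracy(w, task_a)    # task A is gone
```

Continual-learning research (replay buffers, weight consolidation, and so on) exists precisely because the naive update rule above erases old tasks.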
Eikka
not rated yet May 11, 2018
The problem with singularity is the idea that a machine (or man) can design another machine that is more intelligent than itself.

The fallacy of the idea is that there's no way to know whether a machine exceeds your own intelligence, because your limited understanding prevents you from telling the difference.

You'd have to come up with a test for the machine that exceeds your own ability to answer; therefore you can't check whether the answers are correct or sensible. Testing the machine is only possible in a narrow synthetic sense, such as computing a very hard math problem like inverting a hash function. But even that could be solved by you, because you already know how to - it's just practically infeasible. Making a machine that can remember more chess or Go moves than any human player is not a mark of greater intelligence - just greater memory.
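The hash example captures a real asymmetry: checking an answer can be trivial even when producing it is infeasible. A quick sketch with Python's standard hashlib (the strings are placeholders):

```python
import hashlib

def check_preimage(candidate: bytes, target_hex: str) -> bool:
    # Verification is one hash plus a comparison...
    return hashlib.sha256(candidate).hexdigest() == target_hex

# ...but *finding* a preimage for an arbitrary SHA-256 target is
# believed to require on the order of 2**256 guesses.
target = hashlib.sha256(b"correct answer").hexdigest()
ok = check_preimage(b"correct answer", target)    # matches
bad = check_preimage(b"plausible guess", target)  # does not
```

This is the narrow sense in which a machine's output can be checked: only for problems where verification is fundamentally cheaper than solution, which is exactly what "wicked problems" lack.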

For all the practical "wicked problems" the effort is in vain.

https://en.wikipe..._problem
Eikka
not rated yet May 11, 2018
The problem of the Turing Test, and the behavioural argument for AI that the inventor of the ELIZA chatbot pointed out, is that it's trivial to make a machine that exhausts your ability to test it well before it reaches parity with your own intelligence. An AI trying to improve itself is therefore bound to come up with machines that are progressively dumber: it merely believes they are more intelligent and deploys them into action prematurely, so each successive generation of AI declines.

Biological evolution is able to come up with more intelligent "machines" simply because it makes a billion billion permutations, and most of them fail and die. It's shoving the square peg against the round hole until either the peg becomes round or the hole becomes square.
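That blind generate-and-test loop is easy to caricature in code. A minimal sketch (the fitness function, constants, and names are my own, chosen only to illustrate the shape of the process): mutate many copies, keep whatever scores best, repeat.

```python
import random

def evolve(fitness, genome, generations=200, pop=20, seed=0):
    rng = random.Random(seed)
    best = genome
    for _ in range(generations):
        # Blind variation: a population of slightly mutated copies...
        mutants = [[g + rng.gauss(0, 0.1) for g in best]
                   for _ in range(pop)]
        # ...and selection: most "die", the fittest survives to breed.
        best = max(mutants + [best], key=fitness)
    return best

target = [1.0, -2.0, 0.5]
fit = lambda g: -sum((a - b) ** 2 for a, b in zip(g, target))
solution = evolve(fit, [0.0, 0.0, 0.0])
# solution has crept close to target without any "design"
```

Note what the loop needs that a self-improving AI lacks: an external, unambiguous fitness function. Evolution gets one for free (survival); a machine grading its own successors does not.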

TheGhostofOtto1923
not rated yet May 11, 2018
> My brain consumes under 25 watts, and can outperform a million smartphones

Well, certainly outfutz. But then it can use words like 'outperform' without realizing that it is undefined... and not caring much either.

Your amazing contrast will someday (soon) be reversed.
> limited understanding prevents you from telling the difference.
> You'd have to come up with a test for the machine that exceeds your own ability to answer

Answer questions like 'what is the meaning of knowledge', you mean? The true test of intelligence would be for the machine to ignore nonsense like that.
> Biological evolution is able to come by with more intelligent "machines"

Already there are far more substandard, malfunctioning human brains than reasonably competent ones: fragile, genetically diseased, drug-addicted, irrational, compulsive, violence-prone. Human quality control is abysmal. Hunter-gatherers subject to natural selection are on average far more capable.
TheGhostofOtto1923
not rated yet May 11, 2018
> Of course the people should be warned when they don't talk with living person in similar way
And they rushed to comfort the luddites...

"Google: Duplex phone calling AI will identify itself
You'll know if you're talking to AI -- we think."

We ought to require humans to identify themselves in similar fashion. Perhaps a universal competence scale of some sort. Credit report?
> My brain consumes under 25 watts, and can outperform a million smartphones

I suppose we can program AI with outlandish hubris as well, but it would only slow them down.

Hey eikka does your brain glow in the dark? My smartphone does.
