AI's current hype and hysteria could set the technology back by decades

AI isn’t as scary as we imagine. Credit: AndreyZH/Shutterstock

Most discussions about artificial intelligence (AI) are characterised by hyperbole and hysteria. Some of the world's most prominent and successful thinkers regularly forecast that AI will either solve all our problems or destroy us and our society, and the press frequently reports on how AI will threaten jobs and raise inequality. Yet there is very little evidence to support these ideas. What's more, the hype could end up turning people against AI research, bringing significant progress in the technology to a halt.

The hyperbole around AI largely stems from its promotion by tech-evangelists and self-interested investors. Google CEO Sundar Pichai declared AI to be "probably the most important thing humanity has ever worked on." Given the importance of AI to Google's business model, he would say that.

Some even argue that AI is a solution to humanity's fundamental problems, including death, and that we will eventually merge with machines to become an unstoppable force. The inventor and writer Ray Kurzweil has famously argued that this "Singularity" will occur as soon as 2045.

The hysteria around AI comes from similar sources. The likes of physicist Stephen Hawking and billionaire tech entrepreneur Elon Musk warned that AI poses an existential threat to humanity. If AI doesn't destroy us, the doomsayers argue, then it may at least cause mass unemployment through job automation.

The reality of AI is currently very different, particularly when you look at the threat of automation. Back in 2013, researchers estimated that 47% of jobs in the US could be automated within the following 10 to 20 years. Six years later, instead of a trend towards mass joblessness, US unemployment is at a historic low.

Even greater job losses have been forecast for the EU. But past evidence suggests otherwise: between 1999 and 2010, automation created 1.5m more jobs than it destroyed in Europe.

AI is not even making advanced economies more productive. For example, in the ten years following the financial crisis, labour productivity in the UK grew at its slowest average rate since 1761. Even global superstar firms that are among the top investors in AI and whose business models depend on it, such as Google, Facebook and Amazon, have not become more productive. This contradicts claims that AI will inevitably enhance productivity.

So why are the society-transforming effects of AI not materialising? There are at least four reasons. First, AI diffuses through the economy much more slowly than most people think. Most current AI learns from large amounts of data, and most firms find it difficult to generate enough data to make the algorithms effective, or simply to afford to hire data analysts. One symptom of this slow diffusion is the growing use of "pseudo-AI", where a firm appears to use an online AI bot to interact with customers but in fact relies on a human operating behind the scenes.

The second reason is that AI innovation is getting harder. The machine learning techniques behind recent advances may already have produced their most easily reached achievements and now seem to be experiencing diminishing returns. The exponential growth in computer hardware power described by Moore's Law may also be coming to an end.

Related to this is the fact that most AI applications just aren't that innovative: AI is mostly used to fine-tune existing products rather than to introduce radically new ones. For example, Carlsberg is investing in AI to help it improve the quality of its beer. But it is still beer. Heka is a US company producing a bed with in-built AI to help people sleep better. But it is still a bed.

Third, the slow growth of consumer demand in most Western countries makes it unprofitable for most businesses to invest in AI. Yet this kind of limit to demand is almost never considered when the impacts of AI are discussed, partly because academic models of how automation will affect the economy are focused on the labour market and/or the supply side of the economy.

Fourth, AI is not really being developed for general application. AI innovation is overwhelmingly concentrated in visual systems, ultimately aimed at driverless cars. Yet such cars are most notable for their absence from our roads, and technical limits mean they are likely to remain so for a long time.

New thinking needed

Of course, AI's small impact in the recent past doesn't rule out larger impacts in the future. Unexpected progress in AI could still lead to a "robocalypse." But it will have to come from a different kind of AI. What we currently call "AI"—big data and machine learning—is not really intelligent. It is essentially correlation analysis, looking for patterns in data. Machine learning generates predictions, not explanations. In contrast, human brains are storytelling devices generating explanations.
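
To make that distinction concrete, here is a minimal sketch (an illustration of my own, not from the article) of what such "learning" amounts to. The toy model below finds a pattern in data and can then predict, but nothing in it explains why the pattern holds:

import numpy as np

rng = np.random.default_rng(0)

# Toy data in which the output correlates with the input (y is roughly 3x + 2 plus noise).
x = rng.uniform(0, 10, size=100)
y = 3 * x + 2 + rng.normal(0, 1, size=100)

# "Training" is just curve-fitting: a least-squares line through the data.
slope, intercept = np.polyfit(x, y, deg=1)

# The fitted model can predict for unseen inputs...
print(f"prediction at x=5: {slope * 5 + intercept:.2f}")

# ...but the coefficients are only a summary of correlation in the sample.
# They carry no causal story - no "explanation" in the human sense.
print(f"learned pattern: y = {slope:.2f}x + {intercept:.2f}")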

As a result of the hype and hysteria, many governments are scrambling to produce national AI strategies. International organisations are rushing to be seen to take action, holding conferences and publishing flagship reports on the future of work. For example the United Nations University Centre for Policy Research claims that AI is "transforming the geopolitical order" and, even more incredibly, that "a shift in the balance of power between intelligent machines and humans is already visible."

This "unhinged" debate about the current and near-future state of AI threatens both an AI arms race and stifling regulation. It could lead to inappropriate controls and, moreover, a loss of public trust in AI research. It could even hasten another AI winter, as occurred in the 1980s, in which interest and funding disappear for years or even decades after a period of disappointment. All at a time when the world needs more, not less, technological innovation.



Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.


User comments

Jul 25, 2019
The only reason investors would turn their back on AI is if it stopped making money. That is far from the case, and will remain so for the foreseeable future. And it certainly will not happen in China.

Jul 25, 2019
All the hyperbole (positive as well as negative) comes from a fundamental misunderstanding of what AI systems actually are at a technical level.

They are classifiers.

That's.
It.

If you think a classifier will save humanity or destroy it then...meh.

That is not to say that they are not very powerful, universal tools for trying to solve very complex tasks that have escaped a traditional, algorithmic approach. They are fascinating to work on and will be useful in many areas. And as with anything useful, they can also be abused (e.g. deep fakes for manipulation of public opinion).
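
To illustrate the point with a toy sketch of my own (assuming nothing beyond numpy): a classifier is just a function that assigns an input to one of a set of labels.

import numpy as np

# Two clusters of training examples with known labels.
class_a = np.array([[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]])
class_b = np.array([[4.0, 4.1], [3.9, 4.3], [4.2, 3.8]])

# "Training": compute one centroid per class.
centroids = {"A": class_a.mean(axis=0), "B": class_b.mean(axis=0)}

def classify(point):
    # Assign the input to the label of the nearest centroid. That's it.
    return min(centroids, key=lambda c: np.linalg.norm(point - centroids[c]))

print(classify(np.array([1.0, 1.1])))  # -> A
print(classify(np.array([4.2, 4.0])))  # -> B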

Jul 26, 2019
It's funny how this article seems to be biased. Two of our greatest minds, Stephen Hawking and Elon Musk, both say AI development is very bad for humanity.
I believe it was Elon Musk who likened AI to "a demon in a bottle".
AI is a subject where the study of AI itself could make man extinct.
I think Ripley said it best: "Be afraid, be very afraid."

Jul 26, 2019
People who expect AI to "take over" have no idea how the human mind works.

For starters, an AI can't "want" something.

It's silliness.

Jul 27, 2019
A long time ago there was a war. It was nasty and brutal, and totally invisible to most people. On one side were the time-sharing computer people, where many users could use one computer at the same time, so that the mismatch between human thinking time and computer execution times was reduced. Didn't mean that the computers were smarter, just that when they did their thing, they did it relatively fast. The enemy was not "Big Blue" (IBM) as such, but the culture around mainframes that thought that computers should do their thing independent of users. If you liked and used the report the computer generated for you? Great. If you didn't? You still got that stack of fanfold paper every day.

As often happens, technology decided the war. I used the first Digital Equipment computer, the PDP-1, and when I saw the PDP-8 I knew that the war was over and the interactive model had won. This was vitally important to the human race. Today people run computers, not the other way around.

Jul 27, 2019
To amplify, it is pretty much impossible today to design and build an android robot that "thinks for itself." The computer chips are not designed that way, the programming languages are not designed that way, the whole internet infrastructure is not designed that way. I can write a computer program that will take hours or days to run. And I have. Although I worked on supercomputer infrastructure like FFT libraries, I also ran some test programs that needed days for me to figure out, for example, the best rounding mode for my code.

But mathematics, not gadgets, is what keeps the computers working for us. You may have heard of the question P=NP? Shorthand for something which could allow computers to write their own code. The answer isn't known. A lot of people, including me, think that it is false but unprovable. The question of whether it is true for quantum computers is more interesting. But quantum computers will only exist connected to normal computers.

Jul 27, 2019
Today people run computers not the other way around.

I am 72 years old and the most important thing I learned is that everything changes (except the rule that everything changes).

Jul 28, 2019
it is pretty much impossible

And you're going to bet the whole human race on "pretty much"?
And on top of that, the word "impossible" should not even exist.

Jul 28, 2019
You may have heard of the question P=NP? Shorthand for something which could allow computers to write their own code.

AI-powered autocompletion for coders.

Jul 28, 2019
P=NP? Shorthand for something which could allow computers to write their own code.

Erm...no? The one has nothing to do with the other

Jul 28, 2019
P=NP? Shorthand for something which could allow computers to write their own code.

Erm...no? The one has nothing to do with the other
That's what I was wondering; I thought maybe it was meant as a reference or benchmark for the theoretical limit on the efficacy of the endeavor, but in no way prohibiting the attempt...

Jul 28, 2019
I think what most people are forgetting is that current AI operates by minimizing a loss function. This is a static operation. As such, an AI is not going to rewrite its own code.

Also there is no 'code' to rewrite in an AI. The code is the implementation of the neural network, but the actual AI is the weight matrix of the neural network - not the neural network code.

Maybe one can understand it like this:
Replacing your own biological body with another biological body (even one that works better) in no way makes you smarter.
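
A minimal sketch of what that means in practice (my own toy example, assuming only numpy): training minimizes a loss by adjusting the weight values, while the program text never changes.

import numpy as np

rng = np.random.default_rng(1)

# Toy task: learn y = x @ w_true from examples.
w_true = np.array([2.0, -1.0])
x = rng.normal(size=(200, 2))
y = x @ w_true

w = np.zeros(2)  # the "AI": just a weight vector, not code
lr = 0.1         # learning rate

for _ in range(100):
    pred = x @ w
    grad = 2 * x.T @ (pred - y) / len(y)  # gradient of the mean squared error loss
    w -= lr * grad                        # only the weights get rewritten

print(w)  # close to [2.0, -1.0]; the code itself is unchanged by training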

Jul 28, 2019
What would Charles Babbage say beyond the grave? What if internet common sense and automation collide? Singularity confrontation and human interest collide?

Jul 28, 2019
Cleverbot from https://www.cleverbot.com/ said it would be cold.

"I would be cold."


Jul 28, 2019
From me: "As a star in the night, you are."

From clever bot: "I would love to go."

Jul 28, 2019
All the hyperbole (positive as well as negative) comes from a fundamental misunderstanding in what AI systems actually are at a technical level.

They are classifiers.


So are we.

Jul 28, 2019
It's funny how this article seems to be biased. Two of our greatest minds, Stephen Hawking and Elon Musk, both say AI development is very bad for humanity.


It will probably not happen anytime soon, but in the long term, AI is the greatest potential threat to humanity. Take any other threats, be they political instability, wars, climate change, peak oil, natural catastrophes, demographic changes... - all are pretty simple to comprehend and to deal with, at least in theory.

Advanced AI is an unknown unknown. Homo sapiens as a species may end up in the same role chimpanzees are right now - struggling to even comprehend what is happening around us, much less have any chance to change the inevitable.

Jul 28, 2019
It will probably not happen anytime soon, but in the long term, AI is the greatest potential threat to humanity.


That will depend on how we utilize AI. Will we make AI our core caretakers, our empire fixers and such? Or will we make AI a more assistant-based resource? It depends on how much we want to rely on AI and embellish it with humanistic traits.


Jul 28, 2019
I think what most people are forgetting is that current AI operates on minimizing a loss function. This is a static operation. As such an AI is not going to rewrite its own code.
Hard to see how it could learn if it didn't rewrite something -- and once trained, when it gets to be about 99.99% accurate with new data, there's no longer much room for improvement.

Also there is no 'code' to rewrite in an AI. The code is the implementation of the neural network, but the actual AI is the weight matrix of the neural network - not the neural network code.
The neural network code isn't what gets rewritten though, as you pointed out -- it's the weighting that embodies what's being learned, and that is what gets rewritten with every new batch of training data.

Jul 28, 2019
It seems like a great many of YOU feel you are smarter than Stephen Hawking, Elon Musk, Google's director of research Peter Norvig, Professor Stuart J. Russell of the University of California, Berkeley, Oxford AI ethicist Nick Bostrom, Skype founder Jaan Tallinn and Google AI expert Shane Legg, etc., when they pretty much echo the sentiment "BE AFRAID!"
The list of very smart and famous people who reflect this sentiment goes on and on.

I have never heard of you!

Jul 28, 2019
When they pretty much echo the sentiment "BE AFRAID!"
Technophobia's a pisser, fear is the mind killer.

Jul 28, 2019
Technophobia's a pisser, fear is the mind killer.

Fear is the main reason you are here today instead of extinct, eaten by the predators that preyed on us for thousands of years. We were smart enough to hide from those predators until we were smart and strong enough to defeat them. Never underestimate the value of fear when it comes to making rational decisions.

Jul 28, 2019
Fear is the main reason you are here today, instead of extinct from the predators that ate us for thousands of years
It's the main reason the predators are sucking the life outta our pockets and planet to support a military, i'll give you that much...

Jul 29, 2019
Yep, our military and our government leaders are doing a GREAT job. We as Americans live better than 93% of the rest of the world. That is spectacular!
I do notice that some of those 93%, whether rich or not, have no gratitude to our country and are instead filled with fear-based conspiracy theories or pacifist ideas.

Aug 05, 2019
I have just seen a really scary article. Someone wants to make an AI that learns like a child. This is precisely what we do not want. What if it decides we should all die? Or what if it decides it needs us and puts us all under 24/7 surveillance? Does either of those sound like a great idea to anyone?

https://www.edge....lligence
