Why a famous technologist is being branded a Luddite

December 28, 2015 by Andrew Maynard, The Conversation
Voicing concerns isn’t the same as smashing the latest technology.

On December 21, the company SpaceX made history by successfully launching a rocket and returning it to a safe landing on Earth. It's also the day that SpaceX founder Elon Musk was nominated for a Luddite Award.

The nomination came as part of a campaign by the Information Technology & Innovation Foundation (ITIF), a leading science and technology policy think tank, to call out the "worst of the year's worst innovation killers."

It's an odd juxtaposition, to say the least.

The Luddite Awards – named after Ned Ludd, the English worker whose legend inspired the 19th-century backlash against industrial machinery – highlight what ITIF refers to as "egregious cases of neo-Luddism in action."

Musk, of course, is hardly a shrinking violet when it comes to promoting new technology. Whether it's self-driving cars, reusable commercial rockets or the futuristic "hyperloop," he's not known for being a tech party pooper.

So what's the deal?

ITIF, as it turns out, took exception to Musk's concerns over the potential dangers of artificial intelligence (AI) – along with those other well-known "neo-Luddites," Stephen Hawking and Bill Gates.

ITIF is right to highlight the importance of technology innovation as an engine for growth and prosperity. But what it misses by a mile is the importance of innovating responsibly.

Being cautious ≠ smashing the technology

Back in 2002, the European Environment Agency (EEA) published its report Late Lessons from Early Warnings. The report – and its 2013 follow-on publication – catalogs innovations, from PCBs to the use of asbestos, that damaged lives and environments because early warnings of possible harm were either ignored or overlooked.

This is a picture that is all too familiar these days as we grapple with the consequences of unfettered innovation – whether it's climate change, environmental pollution or the health impacts of industrial chemicals.

Things get even more complex, though, with emerging technologies like AI, robotics and the "internet of things." With these and other innovations, it's increasingly unclear what future risks and benefits lie over the horizon – especially as they begin to converge.

This confluence – the "Fourth Industrial Revolution," as some are calling it – is generating remarkable opportunities for economic growth. But it's also raising concerns. Klaus Schwab, founder of the World Economic Forum and an advocate of the new "revolution," writes: "the [fourth industrial] revolution could yield greater inequality, particularly in its potential to disrupt labor markets. As automation substitutes for labor across the entire economy, the net displacement of workers by machines might exacerbate the gap between returns to capital and returns to labor."

Schwab is, by any accounting, a technology optimist. Yet he recognizes the social and economic complexities of innovation, and the need to act responsibly if we are to see a societal return on our techno-investment.

Of course, every generation has had to grapple with the consequences of innovation. And it's easy to argue that past inventions have led to a better present – especially if you're privileged and well-off. Yet our generation faces unprecedented technology innovation challenges that simply cannot be brushed off by assuming business as usual.

For the first time in human history, for instance, we can design and engineer the stuff around us at the level of the very atoms it's made of. We can redesign and reprogram the DNA at the core of every living organism. We can aspire to create artificial systems that are a match for human intelligence. And we can connect ideas, people and devices together faster and with more complexity than ever before.

Innovating responsibly

This explosion of technological capabilities offers unparalleled opportunities for fighting disease, improving well-being and eradicating inequalities. But it's also fraught with dangers. And like any complex system, it's likely to look great… right up to the moment it fails.

Because of this, an increasing number of people and organizations are exploring how we as a society can avoid future disasters by innovating responsibly. It's part of the reasoning behind why Arizona State University launched the new School for the Future of Innovation in Society earlier this year, where I teach. And it's the motivation behind Europe's commitment to Responsible Research and Innovation.

Far from being a neo-Luddite movement, people the world over are starting to ask how we can proactively innovate to improve lives, and not simply innovate in the hope that things will work out OK in the end.

This includes some of the world's most august scientific bodies. In December, for instance, the US National Academy of Sciences, the Chinese Academy of Sciences and the UK's Royal Society jointly convened a global summit on human gene editing. At stake was the responsible development and use of techniques that enable the human genome to be redesigned and passed on to future generations.

In a joint statement, the summit organizers recommended: "It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (ii) there is broad societal consensus about the appropriateness of the proposed application."

Neo-Luddites? Or simply responsible scientists? I'd go for the latter.

If innovation is to serve society's needs, we need to ask tough questions about what the consequences might be, and how we might do things differently to avoid mistakes. And rather than deserving the label "neo-Luddite," Musk and others should be applauded for asking what could go wrong with technology innovation, and thinking about how to avoid it.

That said, if anything, they sometimes don't go far enough. Musk's answer to his AI fears, for instance, was to help launch the OpenAI initiative – in effect accelerating the development of AI in the hope that the more people are involved, the more responsible it'll be.

It's certainly a novel approach – and one that seriously calls into question ITIF's Luddite label. But it still adheres to the belief that the answer to technology innovation is… more technology innovation.

The bottom line is that innovation that improves the lives and livelihoods of all – not just the privileged – demands a willingness to ask questions, challenge assumptions and work across boundaries to build a better society.

If that's what it means to be a Luddite, count me in!

10 comments


KelDude
4.4 / 5 (7) Dec 28, 2015
We are rapidly arriving at a situation where "income" must be detached from "labour". Machines will do everything while we stand by and "live". What we do with our lives will have to be addressed and how we move wealth through our societies will have to be determined. The status quo is only creating a huge divide of wealth and poverty with the latter growing at an ever-increasing pace. I don't have the answer but these questions need to be addressed very soon or as is often said "all hell will break loose". Perhaps ISIL is the thin edge of that wedge. If we don't figure out a way to move forward, I fear for the survival of the human race.
koitsu
4.8 / 5 (6) Dec 28, 2015
Wow, I bet Musk is losing a lot of sleep over that.
SamB
5 / 5 (5) Dec 28, 2015
These losers must be desperate to pick on Mr. Musk. I'll bet none of them can even understand one of the many technological innovations that have come from Mr. Musk over the last 10 years. If Mr. Musk has concerns about AI, then I would pay close attention to his arguments before I would pay any attention to a bunch of headline junkies!
TheGhostofOtto1923
3.7 / 5 (3) Dec 28, 2015
"With these and other innovations, it's increasingly unclear what future risks and benefits lie over the horizon – especially when they begin to converge together"

Tech has always been used to expand our capabilities while minimizing the effects of our limitations.

"All of war is deception." -sun tsu

"All is fair in love and war." -???

Our tropical repro rate made us warfighters. Victorious tribes were obliged to kill enemy males and incorporate their females, thereby accelerating our development tremendously.

Deceivers and tricksters were able to out-reproduce honest, decent folk, giving rise to the vampire within our midst... the psychopath.

These new technologies threaten their existence as never before. AI and IoT will make cheating and deception rare to impossible. Objects and personal wealth will be unstealable. People will be unassaultable. Verified facts will be instantly available to everyone.
cont>
TheGhostofOtto1923
3.7 / 5 (3) Dec 28, 2015
And psychopaths will be exposed to the world.

"The World has only one problem, Psychopaths... The essential feature of Psychopaths is a Pervasive, Obssesive- Compulsive desire to force their delusions on others. Psychopaths completely disregard and violate the Rights of others..."

-Of course they fear this beyond anything else as it means their extinction.
gkam
1 / 5 (3) Dec 28, 2015
Your obsession with psychopathy reminds me of the perverse killer who scrawled on the wall, "Stop me before I kill more!".
0rison
5 / 5 (2) Dec 28, 2015
"Is it okay to be a Luddite?" - Thomas Pynchon - The New York Times
http://www.nytime...ite.html
TheGhostofOtto1923
5 / 5 (2) Dec 30, 2015
George's obsession with himself reminds me of this
http://www.cassio...path.htm
ForFreeMinds
5 / 5 (1) Dec 30, 2015
AI will continue to improve as machine learning software improves. Objecting to it is like objecting to robots that weld and assemble cars because someone might stand where they shouldn't and get hurt when the robot moves. Like all technology, it's how it's used that makes the difference. The defense industry creating autonomous robots to kill people would get me worried, because those robots would have defense mechanisms and be hard to disable.
axemaster
5 / 5 (2) Dec 31, 2015
If innovation is to serve society's needs, we need to ask tough questions about what the consequences might be, and how we might do things differently to avoid mistakes. And rather than deserving the label "neo-Luddite," Musk and others should be applauded for asking what could go wrong with technology innovation, and thinking about how to avoid it.

Exactly.

There's a tendency in science for people to pursue ideas purely out of interest, and without any consideration of the consequences when those ideas leave the lab. There was some debate recently when a research group refused to publish research about a highly dangerous disease - they were afraid the information would be used to create biological weapons. Computer scientists need to start thinking in the same way.
