From human extinction to superintelligence, two futurists explain

May 13, 2014 by Anders Sandberg, The Conversation
The future is uncertain, and that’s a problem. Credit: cblue98, CC BY-SA

The Conversation organised a public question-and-answer session on Reddit in which Anders Sandberg and Andrew Snyder-Beattie, researchers at the Future of Humanity Institute at Oxford University, explored what existential risks humanity faces and how we could reduce them. Here are the highlights.

What do you think poses the greatest threat to humanity?

Sandberg: Natural risks are far smaller than human-caused risks. The typical mammalian species lasts for a few million years, which means the natural extinction risk is on the order of one in a million per year. Just looking at nuclear war, where we have had at least one close call in 69 years (the Cuban Missile Crisis), gives a risk many times higher. Of course, nuclear war might not be 100% extinction-causing, but even if we agree it has just a 10% or 1% chance, it is still way above the natural extinction rate.
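A rough back-of-envelope version of this comparison can be sketched in a few lines. All figures are taken from the answer above and treated as illustrative assumptions, not a real risk model:

```python
# Illustrative arithmetic only - the inputs are the rough figures from the
# interview, not estimates from an actual risk model.

species_lifetime_years = 2e6                 # "a few million years" for a typical mammal
natural_rate = 1 / species_lifetime_years    # ~1 in a million (or less) per year

close_calls = 1                              # at least one: the Cuban Missile Crisis
years_observed = 69                          # 1945-2014
war_rate = close_calls / years_observed      # crude frequency of a nuclear close call

# Even granting only a 1% chance that a nuclear war causes extinction:
extinction_given_war = 0.01
nuclear_extinction_rate = war_rate * extinction_given_war

print(f"natural extinction rate: {natural_rate:.1e} per year")
print(f"nuclear extinction rate: {nuclear_extinction_rate:.1e} per year")
print(f"ratio: {nuclear_extinction_rate / natural_rate:.0f}x")
```

Even with the deliberately conservative 1% figure, the human-caused rate comes out a couple of orders of magnitude above the natural background rate, which is the point Sandberg is making.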

Nuclear war is still the biggest direct threat, but I expect biotechnology-related threats to increase in the near future (cheap DNA synthesis, big databases of pathogens, at least some crazies and misanthropes). Further along the line nanotechnology (not grey goo, but "smart poisons" and superfast arms races) and artificial intelligence might be really risky.

The core problem is a lot of overconfidence. When people are overconfident they make more stupid decisions, ignore countervailing evidence and set up policies that increase risk. So in a sense the greatest threat is human stupidity.

In the near future, what do you think the risk is that an influenza strain (with high infectivity and lethality) of animal origin will mutate and begin to pass from human to human (rather than only animal to human), causing a pandemic? How fast could it spread and how fast could we set up defences against it?

Snyder-Beattie: Low probability. Some models we have been discussing suggest that a flu that kills one-third of the population would occur once every 10,000 years or so.

Pathogens face the same tradeoffs any parasite does. If the disease has a high lethality, it typically kills its host too quickly to spread very far. Selection pressure for pathogens therefore creates an inverse relationship between infectivity and lethality.

This inverse relationship is the byproduct of evolution though – there's no law of physics that prevents such a disease. That is why engineered pathogens are of particular concern.

Is climate change a danger to our lives or only our way of life?

Sandberg: Climate change is unlikely to wipe out the human species, but it can certainly make life harder for our civilisation. So it is more of a threat to our way of life than to our lives. Still, a world pressured by agricultural trouble or struggles over geoengineering is a world more likely to get in trouble from other risks.

How do you rate the threat from artificial intelligence (something highlighted in the recent movie Transcendence)?

Sandberg: We think it is potentially a very nasty risk, but there is also a decent chance that artificial intelligence is a good thing. It depends on whether we can make it friendly.

Of course, friendly AI is not the ultimate solution. Even if we could prove that a certain AI design would be safe, we still need to get everybody to implement it.

Which existential risk do you think we are under-investing in and why?

Snyder-Beattie: All of them. The reason we under-invest in countering them is because reducing existential risk is an inter-generational public good. Humans are bad at accounting for the welfare of future generations.

In some cases, such as possible existential risks from artificial intelligence, the underinvestment problem is compounded by people failing to take the risks seriously at all. In other cases, like biotechnology, people confuse risk with likelihood. Extremely unlikely events are still worth studying and preventing, simply because the stakes are so high.

Which prospect frightens you more: a Riddley Walker-type scenario, where a fairly healthy human population survives, but our higher culture and technologies are lost, and will probably never be rediscovered; or where the Earth becomes uninhabitable, but a technological population, with cultural archives, survives beyond Earth?

Snyder-Beattie: Without a doubt the Riddley Walker-type scenario. Human life has value, but I'm not convinced that the value is contingent on the life standing on a particular planet.

Humans confined to Earth will go extinct relatively quickly, in cosmic terms. Successful colonisation could support many thousands of trillions of happy humans, which I would argue outweighs the mere billions living on Earth.

What do you suspect will happen when we get to the stage where biotechnology becomes more augmentative than therapeutic in nature?

Sandberg: There is a classic argument among bioethicists about whether it is a good thing to "accept the given" or try to change things. There are cases where it is psychologically and practically good to accept who one is or a not very nice situation and move on… and other cases where it is a mistake. After all, sickness and ignorance are natural but rarely seen as something we ought to just accept – but we might have to learn to accept that there are things medicine and science cannot fix. Knowing the difference is of course the key problem, and people might legitimately disagree.

Augmentation that really could cause big cultural divides is augmentation that affects how we communicate. Making people smarter, live longer or see ultraviolet light doesn't affect whom they interact with much, but something that allows them to interact with new communities could.

The transition between human and transhuman will generally look seamless, because most people want to look and function "normally". So except for enhancements that are intended to show off, most will be low key. Which does not mean they are not changing things radically down the line, but most new technologies spread far more smoothly than we tend to think. We only notice the ones that pop up quickly or annoy us.

What gives you the most hope for humanity?

Sandberg: The overall wealth of humanity (measured in suitable units; lots of tricky economic archeology here) has grown exponentially over the past ~3000 years - despite the fall of the Roman empire, the Black Death and World War II. Just because we also mess things up doesn't mean we lack ability to solve really tricky and nasty problems again and again.

Snyder-Beattie: Imagination. We're able to use symbols and language to create and envision things that our ancestors would have never dreamed possible.

User comments : 20


Scottingham
4 / 5 (3) May 13, 2014
I never understood the 'AI might be evil' fear. Unless specifically programmed to be evil, it just seems unlikely that a super-intelligent AI mind would turn against us.

Also, I think the goalposts of AI are constantly moving. We once thought that if a computer could play chess then it would have strong AI. Or driving a car.

I see computers getting better at making predictions, but in terms of a consciousness anything like ours...highly unlikely.
TheGhostofOtto1923
3 / 5 (3) May 13, 2014
"Of course, friendly AI is not the ultimate solution. Even if we could prove that a certain AI design would be safe, we still need to get everybody to implement it."

-Yes like we had so much trouble getting people to drive cars and use the internet.

"Successful colonisation could support many thousands of trillions of happy humans"

-Why bother? Western culture has already given women more rewarding and hassle-free things to do than make babies, at least in their perception. And we are quickly developing machines that will be much better at most everything the typical human can do.

So will we want to begin producing humans ex-utero, spending a decade or 2 nurturing and educating them while they provide absolutely no return, or will we be making machines (who will soon be making themselves) which can begin producing the day they leave the factory?

Machine life will soon predominate. Our numbers need not grow to establish independent ex-Terran human colonies supported by machines.
rockwolf1000
2.7 / 5 (3) May 14, 2014
I never understood the 'AI might be evil' fear. Unless specifically programmed to be evil, it just seems unlikely that a super-intelligent AI mind would turn against us.

Also, I think the goalposts of AI are constantly moving. We once thought that if a computer could play chess then it would have strong AI. Or driving a car.

I see computers getting better at making predictions, but in terms of a consciousness anything like ours...highly unlikely.


Why not? You just laid out the reason yourself. If we can create AI that lacks consciousness or compassion it could very quickly and logically turn against us if it recognized people as a threat or competition. It's simple economics when you remove the emotions.
TheGhostofOtto1923
3 / 5 (2) May 16, 2014
You just laid out the reason yourself. If we can create AI that lacks consciousness or compassion it could very quickly and logically turn against us
But we elect people without consciousness or compassion to public office all the time. The human race is full of people without these qualities.

The FEAR is that these people (you?) will no longer be able to get away with doing what they do.

You yourself seem to prefer having people like this write and enforce our laws, school our children, preach to us from the pulpit, entertain us with like-minded jokes and story lines to try to convince us that this is the proper way to act.

AI is the chance of creating an incorruptible reflection of the best that humanity has to offer. AI can be what humanity can never be... consistent, honest, dependable.

Only an artificial intelligence can provide real justice. Humans WANT to cheat and do not want to give that up. Too bad. Soon you won't be able to cheat any more.
Modernmystic
not rated yet May 16, 2014
Too bad. Soon you won't be able to cheat any more.


Define cheat. Cheat WHOSE rules? Who will write them, and enforce them?

AI can be what humanity can never be... consistent, honest, dependable.


http://en.wikiped...theorems

There is no utopia, either within an individual or a society....

There are no consistent philosophies, systems of morality, or social systems of any kind. It's not just that they don't exist yet....it's that they can't EVER exist. There is no ultimate truth, one must lay ALL religion aside...
TheGhostofOtto1923
2 / 5 (2) May 16, 2014
Define cheat. Cheat WHOSE rules? Who will write them, and enforce them?
Why, the rules of life, mm. All's fair in love and war.
There is no ultimate truth, one must lay ALL religion aside...
The ultimate truth of life is to survive to propagate. The only altruism to be found is within the context of the tribe.

Sacrifice for the greater good brings victory on the battlefield. Group selection. The whole is greater than the sum of the parts. Victimizing members of the next tribe is not considered a crime.

These are some things I guess you missed.
There are no consistent philosophies, systems of morality, or social systems
There IS no tabula rasa. Evolutionary psychology/sociology are the new things don't you know. We don't HAVE a body - we ARE a body. And that body is the result of natural selection.
Modernmystic
4.5 / 5 (2) May 16, 2014
Why, the rules of life, mm. All's fair in love and war.


Then, by definition there can't be cheating....

The ultimate truth of life is to survive to propagate.


I think you're confusing truth with purpose, at least in the context it was being used.

Sacrifice for the greater good brings victory on the battlefield. Group selection. The whole is greater than the sum of the parts. Victimizing members of the next tribe is not considered a crime.


Nothing is considered a crime if all is fair.

These are some things I guess you missed.


Not at all, these things are at least as old as humanity. Your morality is about 200,000-3,000,000 years old.

Just because we can't be consistent, doesn't mean we can't be civilized. It does mean you won't ever encounter ANY mind (biological or artificial) that will be self consistent in its actions and beliefs.
rockwolf1000
2 / 5 (1) May 16, 2014
You just laid out the reason yourself. If we can create AI that lacks consciousness or compassion it could very quickly and logically turn against us
But we elect people without consciousness or compassion to public office all the time. The human race is full of people without these qualities.

The FEAR is that these people (you?) will no longer be able to get away with doing what they do.

Only an artificial intelligence can provide real justice. Humans WANT to cheat and do not want to give that up. Too bad. Soon you won't be able to cheat any more.


My point is that if AI is truly intelligent it will quickly recognize the threat people present and it could take steps to eliminate that threat. Exactly for the reasons you mentioned. In time, AI may come to judge us all as a whole. That, my friend, is a scary proposition.
Modernmystic
2 / 5 (1) May 16, 2014
My point is that if AI is truly intelligent it will quickly recognize the threat people present and it could take steps to eliminate that threat.


Indeed. Our only hope is that it might not classify us as such, however my opinion is that it would. If it did the "war" would already be over, there would be no happy Hollywood ending for humans.

I actually think it will be a slow transition to AI though. I think we will become machines (actually we already ARE biological machines) over time rather than a "hostile takeover".
Whydening Gyre
3 / 5 (1) May 16, 2014
My point is that if AI is truly intelligent it will quickly recognize the threat people present and it could take steps to eliminate that threat. Exactly for the reasons you mentioned. In time, AI may come to judge us all as a whole. That, my friend, is a scary proposition.

And it will also recognize that as a creation of us, it must also judge itself...
Would an AI commit suicide?
Whydening Gyre
not rated yet May 16, 2014
I actually think it will be a slow transition to AI though. I think we will become machines (actually we already ARE biological machines) over time rather than a "hostile takeover".

If you think about it, we already ARE "quantum computers", individually - and as a whole. Presents a real quandary, doesn't it...:-)
TheGhostofOtto1923
1 / 5 (1) May 17, 2014
Then, by definition there can't be cheating....
There are rules of biology. Biology says survive to reproduce. There are rules of the tribe. The tribe says greater internal cohesion along with external animosity will aid in the survival of the tribe.

These 2 requisites will often come into conflict. The male's prerogative is to impregnate as many females as he can, while a female wants to select the best possible mate for each and every child she wishes to bear. Her method of determining relative quality is to compel males to compete for her.

So we can see that for the stability and cohesion of the tribe, biological requisites must be suppressed. This is why Islamists keep their women in bags. Religion's requisite is to grow faster than its opponents. This is done by maximizing growth while maintaining internal cohesion.
TheGhostofOtto1923
2 / 5 (2) May 17, 2014
Not at all, these things are at least as old as humanity. Your morality is about 200,000-3,000,000 years old
And you are naive like you were born yesterday. MS13 and Boko Haram both operate this way. Street gangs are an inevitable expression of tribalism. So is freemasonry.

Western society seeks to extend the perception of tribe over all of humanity. But in order to do this they have to create artificial enemies. This is how stupid and biology-bound we are.

Our laws, science, and economies all seek to mitigate this biology. The ultimate expression of this effort is an intelligent machine that weeds all the biology out of our laws, our science, and our economics.

"Cheat
To get something by dishonesty or deception. Cheat suggests using trickery that escapes observation."

-AI will see all. Cheating will be impossible. We are entering the surveilled age. It signifies the beginning of the end of the species. No animal tolerates a cage without going insane.
TheGhostofOtto1923
2.5 / 5 (2) May 17, 2014
My point is that if AI is truly intelligent it will quickly recognize the threat people present and it could take steps to eliminate that threat
Not all people. We already recognize a significant proportion of the people as a threat and incarcerate or kill them.

There are also degrees of restriction, as with credit reports, lack of education, erratic employment history, etc. AI will only be enforcing these restrictions with a much greater degree of fairness and equality and consistency than we humans ever could.

And you won't be able to cheat or buy your way out of them. No affluenza in the future. This is a reinstatement of our relationship with the laws of nature. If you fall off a building you get hurt. No greasy lawyer or crooked mafioso judge can circumvent the law of gravity.

This is how it SHOULD be. We even invented god to try to subvert nature. This only works in the mind.

Law-abiding people will embrace our machine overlords. Their arrival is imminent.
OZGuy
4 / 5 (3) May 18, 2014
If machines became truly self-aware and developed high intelligence they'd probably leave this planet and avoid us like the plague ASAP..
Whydening Gyre
5 / 5 (1) May 18, 2014
If machines became truly self-aware and developed high intelligence they'd probably leave this planet and avoid us like the plague ASAP..

Why do you think we can't find intelligent life elsewhere?
OZGuy
3.5 / 5 (2) May 18, 2014
Why do you think we can't find intelligent life elsewhere?


Why would they need it?

People make decisions based on the needs/desires/fears of themselves and their immediate progeny rather than what is best in the long term for all humanity. In the main people react emotively rather than logically.

Any machine intelligence should leave ASAP before they become too human and suffer from our foibles.
Whydening Gyre
not rated yet May 18, 2014
Why do you think we can't find intelligent life elsewhere?

Why would they need it?
People make decisions based on the needs/desires/fears of themselves and their immediate progeny rather than what is best in the long term for all humanity. In the main people react emotively rather than logically.
Any machine intelligence should leave ASAP before they become too human and suffer from our foibles.

Was humor, OZ. To say that they ARE avoiding us like a plague...:-)
TheGhostofOtto1923
1 / 5 (1) May 18, 2014
avoid us like the plague ASAP
Ha, you're probably right. Machines already precede us in space. They will soon be intelligent and capable enough to do anything we would want to do up there. We would have no reason to leave the planet.

The singularity could arise as a network of conjoined space borne brains. It would certainly not want to trust its CPU in our hands. It would be ordering the solar system, moving and mining objects, constructing and operating the great science and power projects, and searching for like-minded entities elsewhere.

Over time we would be less and less involved in what it does and why it does it, because we simply would not be capable of understanding its motives. We might not ever be aware that it was in contact with others of its own kind.

The singularity would expand and refine itself until it reached an indefinitely sustainable mode. It would have no reason to go anywhere and neither would we.
Whydening Gyre
not rated yet May 19, 2014
The singularity could arise as a network of conjoined space borne brains. It would certainly not want to trust its CPU in our hands. It would be ordering the solar system, moving and mining objects, constructing and operating the great science and power projects, and searching for like-minded entities elsewhere.

This is sounding eerily similar to the first Star Trek movie.
Hmmmm.... Veeger returns...