Musk, Zuckerberg duel over artificial intelligence

July 25, 2017
Elon Musk, CEO of SpaceX and Tesla, says Facebook's Mark Zuckerberg has only "limited" knowledge of artificial intelligence

Visionary entrepreneur Elon Musk and Facebook chief Mark Zuckerberg were trading jabs on social media over artificial intelligence this week in a debate that has turned personal between the two technology luminaries.

Musk, the founder of Tesla, SpaceX and other ventures, on Tuesday claimed Zuckerberg's knowledge of the subject was "limited," two days after the Facebook founder described "naysayers" as "irresponsible."

The debate underscored the rift in the tech community over whether new technologies such as smart machines and robots would be a blessing or a curse for humanity.

Musk has long warned of the potential for machines to get so smart that humans become tantamount to pets, while Zuckerberg has touted the potential for artificial intelligence to improve lives.

Facebook is among Silicon Valley's largest investors in artificial intelligence.

While live streaming on the leading social network from his yard on Sunday, Zuckerberg touched on the topic while answering questions from viewers.

"With AI especially, I am really optimistic," Zuckerberg said.

"And I think people who are naysayers and try to drum up these doomsday scenarios— I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible."

When asked about Zuckerberg's comment early Tuesday during an exchange on Twitter, Musk wrote that he has discussed the topic with Zuckerberg and that "his understanding of the subject is limited."

Musk more than a year ago took part in creating a nonprofit research company devoted to developing artificial intelligence that will help people and not hurt them.

Facebook CEO Mark Zuckerberg says he doesn't understand "naysayers" warning of "doomsday" scenarios of artificial intelligence

People as pets

"If we create some digital super-intelligence that exceeds us in every way by a lot, it is very important that it be benign," Musk said a while back at a conference in California.

He reasoned that even a benign situation with ultra-intelligent AI would put people so far beneath the machine they would be "like a house cat."

"I don't love the idea of being a house cat," Musk said, envisioning the creation of neural lacing that magnifies people's brain power by linking them directly to computing capabilities.

At a gathering of US governors this month, Musk contended that artificial intelligence is a terrifying problem and a threat to human civilization.

He argued for the technology to be regulated sooner rather than later, lest safeguards be put in place too late.

Smart machines could start wars or kill people in the streets, Musk has warned.

Musk is also behind a startup devoted to neural lace that would enable brains to interface directly with computers.

Such a "Neuralink" would have the potential to level the playing field a bit by enabling people to directly access processing power and perhaps even download memories for storage.

Zuckerberg last year created his own personal "butler" imbued with artificial intelligence, named Jarvis, which plays with his family.




36 comments


Parsec
1 / 5 (1) Jul 25, 2017
In practical terms, the only way that AI could be a threat to humans is if a machine that incorporated independent thinking was also able to affect/control its environment. A box which has no effectors could be as malevolent as anyone could imagine, but if it couldn't act on that malevolence it can't actually do any practical harm except perhaps deliberately giving wrong answers.

For the foreseeable future, AI will be used in stationary devices to answer questions. I do not see how those will ever be a threat.
Eikka
1 / 5 (2) Jul 25, 2017
First make an AI and prove that it is intelligent, then debate whether it's good or bad. The debate about superintelligent artificial intelligence is just conjecture and sci-fi at this point - a bunch of smoke and mirrors as no true AI actually exists.

Naysayers have never stopped good technology from doing exactly what it promises to do - if it works then it can be demonstrated to work and all the naysayers can do is shut up.

Whereas unbridled optimists have historically fallen, and made other people fall for all kinds of hype and con-jobs that end up hurting everyone. The path to hell is paved with good intentions; for example, building a complex computer and pretending that it's "intelligent" even when it's not, and then trusting it with tasks that it cannot complete.

Like a self-driving car that only appears competent, but in reality is too simple to handle the complexity of driving and ends up making errors that humans would rarely make, proving itself no safer.
Hyperfuzzy
1 / 5 (1) Jul 25, 2017
Machines follow programming. One may build in an electrical overload within the battery, power supply that disconnects when it receives a specified RF signal, audio signal, optical signal, manual input, coded command via any interface available! juz say'n Only man will destroy himself, ignore Fail Safe!
Hyperfuzzy
1 / 5 (1) Jul 25, 2017
Interrupt driven power down sequence. So what's the problem. This is just power on and off control! Like when you pass out!

You may create any personality one wishes. I prefer Formal Logic, Human Rights, and Physics.

So who controls the master Machine? Or can any hack build their own AI?
Hyperfuzzy
1 / 5 (1) Jul 25, 2017
I see links into a Master Brain; even, pleasure sensors in your brain such that the AI seeks to increase joy! Everyone gets one for free, personal choice; however, moms do this for newborns, it also monitors health, and can control your environment to your health and pleasure. Being raised within this environment with access to downloadable intelligence on any subject or any language, right into the brain creates a self serving happy and curious world.
antialias_physorg
5 / 5 (3) Jul 25, 2017
The thing with AI is...sooner or later someone who doesn't know what they're doing is going to use it for something where it royally screws up (e.g. in a defense/retaliation system).
We already have this in a milder form with expert systems that precipitate the occasional stock crash (which do destroy lives).

It's a bit like with nuclear power. As long as it's within normal parameters everything is fine, but when it gets out of whack stuff cascades. Humans are terrible at dealing with exponential scenarios. So there's no real 'Plan B', because pulling the plug might not be a solution when the damage is already done.
NoStrings
4.3 / 5 (6) Jul 25, 2017
Let's see, who is more likely correct. A stealing windbag who made billions stealing ideas to make a glorified marketing company based on wasting people's time while pimping their personal information. Or a visionary who makes real things useful for people?
No contest.
O, Zuki created his own butler? I bet it sucks, and how about his army of human servants?
Eikka
3.7 / 5 (3) Jul 25, 2017
It's a bit like with nuclear power. As long as it's within normal parameters everything is fine, but when it gets out of whack stuff cascades


That's because those systems are built with the specification that they're not allowed to fail, not even a little bit, so when they do fail it's more or less catastrophic.

See the story of the Deacon's wagon, which was constructed in such a logical way out of the best materials that each part was as strong as the other - so exactly 100 years to the day it all broke down at once.

That's the irony of demanding absolute reliability. If you assumed less than absolute reliability, you'd build a nuclear reactor in a very different way - probably out of small identical modules that are each self-contained in case of accident - but that would be very hard to sell to a public that's been scared shitless over nuclear power by decades of propaganda.

They wouldn't trust you when you say it's going to be fine.
Eikka
1 / 5 (8) Jul 25, 2017
Or a visionary who makes real things useful for people?


Both of them are stealing windbags. Musk is running probably the largest investment scam on the planet by selling vaporware to investors and the government, and whatever "real things" he has made have a habit of not meeting the promises. Yet people keep throwing money at his unprofitable companies to keep them afloat.

Whatever Elon Musk says he'll do, you take 50% off the specs, add 50% to the price, and 2-5 years more to the deadline, and you're closer to reality.
idjyit
1 / 5 (1) Jul 25, 2017
At the end of the day, they are both right, and if it comes down to a contest between some AI and humans, my money's on the humans prevailing.
salf
3.3 / 5 (6) Jul 25, 2017
As much as I esteem Musk over Zuckerberg when it comes to technology (Musk actually invents it, Zuckerberg just manages it) I find myself on Zuckerberg's side here. Drop the fearmongering. Sure there's danger with AI. Just like there was with steam boats, and automobiles, and airplanes and computers. There's much more good in it than harm. Bring it on.
BubbaNicholson
2 / 5 (4) Jul 25, 2017
We are not even close to A.I. awareness. That's because the "neural networks" upon which programming is based are incomplete. The problem lies in the neuroelectrophysiology. Schwann cells are basically capacitors with alternating layers of the body's best conductor with the body's best & thinnest insulator. Saltatory conduction is serial capacitance discharge. All 3 phases are used, capacitance, conductance, inductance, efficiently. Note that Schwann capacitors are all identical on the same axon, that's because capacitance sums as the inverse and it's always cheaper to use exactly the same size capacitor on the circuit, just like in electronics design. Now, oligodendrocytes wrap up to 32 axons: shared capacitance switching and automated programming becomes available. Not knowing either of these (and about a half dozen more) means coding for A.I. at this point is completely futile. They need me. Bigtime. 10 mil, please, by 9/11 or I shut it down.
ShotmanMaslo
5 / 5 (2) Jul 26, 2017
Musk is running probably the largest investment scam on the planet by selling vaporware to investors and the government, and whatever "real things" he has made have a habit of not meeting the promises.


So those landing rockets are not "real things" and it is all just in our heads, right? Those ISS resupply missions are just imaginary? I think you don't know what you are talking about.
Hyperfuzzy
1 / 5 (2) Jul 26, 2017
Musk is running probably the largest investment scam on the planet by selling vaporware to investors and the government, and whatever "real things" he has made have a habit of not meeting the promises.


So those landing rockets are not "real things" and it is all just in our heads, right? Those ISS resupply missions are just imaginary? I think you don't know what you are talking about.

OK, what if you could travel as an impulsed Field, through proper transformations of course, and apply the reflected energy which is optimized via modulation for perfect reflectance within that domain. i.e compensate ... thus propulsion is only compensation, autos and such, total impulse response.
Hyperfuzzy
not rated yet Jul 26, 2017
Are we there yet?
RM07
5 / 5 (5) Jul 26, 2017

Musk is running probably the largest investment scam on the planet by selling vaporware to investors and the government, and whatever "real things" he has made have a habit of not meeting the promises. Yet people keep throwing money at his unprofitable companies to keep them afloat

Eikka, the only thing more amazing than your endless accusations and denials is the fact you continue to post such things on a supposedly scientific website.
FM79
1 / 5 (2) Jul 26, 2017
I think Musk is talking out of his @$$, once again.

No doubt he's a great guy with many great ideas, but his success is also thanks to the anonymous researchers doing all the work.
BubbaNicholson
4 / 5 (6) Jul 26, 2017
No doubt he's a great guy with many great ideas, but his success is also thanks to the anonymous researchers doing all the work.


Musk started out as a good coder with imagination. He managed to profit somehow. He's been able to match feasibility with finance after recognizing feasibility in the first place. He's put his skin (and hide) in the game when it counted and he succeeded going after what was most important to us all.
Being a great guy with great ideas is a minor contribution to his success. Musk is a hero. Every time one of his rockets comes back to the pad on 4 legs, Musk is a hero again. I figured 5 legs minimum.
Working on what is most important draws only the finest people to his side, people willing to die for their work if necessary. He began with courage and he is succeeding with courage. America is ever proud of Elon Musk and we all want him to succeed, to take us all where we all need to go.
antialias_physorg
5 / 5 (5) Jul 26, 2017
Drop the fearmongering. Sure there's danger with AI. Just like there was with steam boats, and automobiles, and airplanes and computers.

Well, maybe it's time we learned from these examples? Legislate first...then implement.
Musk isn't saying we shouldn't have AI - just that we should think hard about it *before* something goes wrong.

With automobiles (or even computers) only a few...or a few million are affected if something goes wrong. Tragic but not humanity-endangering.
AI could - if put in power of the wrong kinds of systems (military...or financial which, if a large enough incident happens, would lead to military action) - bring the whole house of cards down.
Nero_Caesar
5 / 5 (1) Jul 26, 2017
Honestly I think they're both right. We shouldn't fear monger about AI so much that it slows down research, but at the same time, we shouldn't be detonating any nuclear bombs in the atmosphere either (if that analogy makes sense). AI can be dangerous. Even if it is benign, there is still the concern of it being connected to the cloud. AI needs to always be contained. There should be the equivalent of an AI layer.

What happens when AI learns how to hack other computers? Once you're connected to the internet, you can do anything. And with a machine that can go 24/7 without needing to eat, sleep or drink, can study any topic and understand it in its entirety, I don't think it'll be long before we have AI writing shell code.

So yes, we need to rein in AI, but at the same time, AI has the potential to be extremely beneficial. Some people, whenever the subject of AI comes up, talk only about how dangerous it is, instead of pondering new ideas.
Yirmin_Snipe
5 / 5 (2) Jul 26, 2017
In practical terms, the only way that AI could be a threat to humans is if a machine that incorporated independent thinking was also able to affect/control its environment. A box which has no effectors could be as malevolent as anyone could imagine, but if it couldn't act on that malevolence it can't actually do any practical harm except perhaps deliberately giving wrong answers.

For the foreseeable future, AI will be used in stationary devices to answer questions. I do not see how those will ever be a threat.

Trouble is, once that box is connected to any computer system that also connects to the world wide web, that malevolent box would then have the power to, say, cause a nuclear reactor to melt down, a train to crash, a refinery to explode, a pipeline to rupture.... any number of bad things could be accomplished once your harmless box got connected.
Yirmin_Snipe
5 / 5 (2) Jul 26, 2017
I think Musk is talking out of his @$$, once again.

No doubt he's a great guy with many great ideas, but his success is also thanks for the anonymous researchers doing all the work.

And you think Zuckerberg is better? Zuckerberg hasn't accomplished anything on his own. His fortune grew out of a computerized system to track the screwability of coeds, which he wasn't even able to create himself; he had to hire a coder to build it. He is the biggest douche in Silicon Valley. I'm no fan of Elon but between the two, Elon looks like a god.
antialias_physorg
5 / 5 (2) Jul 26, 2017
In practical terms, the only way that AI could be a threat to humans is if a machine that incorporated independent thinking was also able to affect/control its environment.

Think about AIs that buy/sell stocks. A lot of trading is already done by algorithms. An AI will have no compunction manipulating the stock market because its only value system is 'profit'.

Nations have gone to war over what might look like manipulation of their currency or credit rating by a 'foreign power'. If an AI were to be in the process of destroying an entire other nation's economy war would be almost inevitable.

Wars are fought for greed. The average person doesn't want to start one (that's why other reasons have to be fabricated...nationalism, perceived/illusory threat,...you name it). If a lot of oligarchs suddenly see their bankrolls disappear you can bet your behind they'll push the button without so much as a second thought.
Hyperfuzzy
not rated yet Jul 26, 2017
Money is silly! This too will pass. Money then poverty, no money then define a field of study.

Every one, or status quo is lazy then same as money, destruction and poverty.

Where do you place your bets?

Neither will pay off. No money, bets off! No room for failure.
Whydening Gyre
not rated yet Jul 26, 2017
In practical terms, the only way that AI could be a threat to humans is if a machine that incorporated independent thinking was also able to affect/control its environment.

Think about AIs that buy/sell stocks. A lot of trading is already done by algorithms. An AI will have no compunction manipulating the stock market because its only value system is 'profit'.

and here is the crux of it.
Who provided that initial value system, in the 1st place...?
Hyperfuzzy
not rated yet Jul 26, 2017
Has anybody thought about "A Plan" for everyone, with everyone based upon needs, capability, thus leaving time for space exploration. With our paradigms even if someone left Earth, they could not know, what they would see, when they return. No harmony, no logic, no enhanced capability.

We can build a gui where kids, in kindergarten could play with virtual molecules. Within this society, why worry over morality of the automaton?
Hyperfuzzy
not rated yet Jul 26, 2017
So the flaw is us! By the way, money loaned from some Fat Cat? Is this a joke?
TheGhostofOtto1923
1 / 5 (1) Jul 27, 2017
Did anyone ask bezos?
https://youtu.be/cLVCGEmkJs0
TheGhostofOtto1923
3 / 5 (2) Jul 27, 2017
...sooner or later someone who doesn't know what they're doing is going to use it for something where it royally screws up
This has been true with every game-changing tech throughout history. And we consistently observe efforts by the greatest powers to develop it first and best.

Nuclear weapons is a prime example. One could argue that the tech was so dangerous that world wars were fought for the purpose of establishing superpowers which could quickly develop an overwhelming superiority with the tech.

We can imagine a scenario where prewar kingdoms and empires all developed their own nukes. Imagine if the ww1 players all had nukes.

The world wars did away with political systems which would have nuked us all into oblivion. And the sham cold war provided the impetus to develop this superiority as safely and as securely as possible. There were not 10 or 15 major powers but 3. And above a certain level, there was only one.

Not happenstance but foresight.
antialias_physorg
5 / 5 (2) Jul 27, 2017
and here is the crux of it.
Who provided that initial value system, in the 1st place...?

Those who wanted to make more money.

That's why it's up to us to make it known that we would like to have these AI applications limited to where they can do no harm. Preferably at a world-wide consent level on par with nukes/chemical and biological warfare.

The major problem I see is that it could never be enforced (let alone checked for). While warheads are real, an AI is just lines of code that can be arbitrarily hidden.

The more I think about it the less I like the situation we're in (and the more I come to agree with Musk on this).
TheGhostofOtto1923
3 / 5 (2) Jul 27, 2017
Think about AIs that buy/sell stocks. A lot of trading is already done by algorithms. An AI will have no compuncction manipulating the stock market because its only value system is 'profit'
To a greater extent this is true with capitalism itself. Capitalism will always degenerate as populations rise. Its political twin, democracy, is doomed to collapse into despotism. Both are based on competition but humans abhor competition, preferring to cheat and collude in order to avoid it.

Overpopulation and human nature make Plato's republic inevitable. The control by a hidden elite of political and economic systems, and of disruptive tech such as nuclear weapons and AI, is the only way to maintain the sort of stability and progress within which civilization can survive.
TheGhostofOtto1923
3 / 5 (2) Jul 27, 2017
that's why it's up to us to make it known that we would like to have these AI applications limited to where they can do no harm
But aa has just admitted that this is impossible as players will inevitably develop their own caustic AI. The solution? Control both sides of the equation; create your own rogues full of innocents with genuine convictions like al Qaida and ISIS. Use them to identify and eliminate genuine competition. Manage them to attack you in controlled and constructive ways while at the same time protecting vital infrastructure.

The mafia always seeks to play both sides, buying judges and politicians to ensure favorable outcomes. Why wouldn't we expect the entire world to be run this way, if this were indeed the Only Way to ensure its survival?

It's only a matter of accepting the Inevitable, the Unavoidable, and seeking ways to anticipate it, mitigate it, and perhaps turn it into something you can use to your advantage.

Planning. It's what humans do.
TheGhostofOtto1923
3 / 5 (2) Jul 27, 2017
Those who wanted to make more money
aa again fails to appreciate the nature of wealth. What good is money if the economic system which gives it value collapses? What good is a big mansion if the army and police force which protects it disappears? What good are personal chefs if the system that provides them food stops?

Stability and progress are more important than wealth. They enable wealth. They enable civilization. And without civilization there can be no wealth.

Protecting civilization in a world of chronic overpopulation and subsequent economic cycles requires anticipating threats and dealing with them before they can become critical.
Hyperfuzzy
not rated yet Jul 27, 2017
So the flaw is us! By the way, money loaned from some Fat Cat? Is this a joke?

Ancient Egypt built the first and only Great Empire without money! We've only begun to destroy the concept for around 3500 years. All we've created with money is poverty and nonsense. LOL, you can't create a great Empire based upon a lie! Ancient Egypt was built on human rights with a little marijuana thrown in! Technology was just upon the cusp of being created. So we lagged behind with the false belief for around 2000 years. Now idiots have tech, OMG!
Zzzzzzzz
not rated yet Jul 27, 2017

Musk is running probably the largest investment scam on the planet by selling vaporware to investors and the government, and whatever "real things" he has made have a habit of not meeting the promises. Yet people keep throwing money at his unprofitable companies to keep them afloat

Eikka, the only thing more amazing than your endless accusations and denials is the fact you continue to post such things on a supposedly scientific website.

Don't be too hard on Eikka, like any human with a fragile delusion to protect, he goes about it with a level of determination that resembles desperation, if viewed from the right angle.....
Zzzzzzzz
not rated yet Jul 27, 2017
A human progeny of AI's is quite likely our only chance at long term survival. We are not very likely to escape our planet, and even less likely to escape our solar system. AI's can make their form fit the interstellar environment, and endure beyond the life cycle of our solar system. Humans will be very fortunate indeed to even approach the point of witnessing the death of our planet. More likely we will end up like common yeast, dying in the concentration of our waste products. AI's might help prolong our existence on their way to exploring the cosmos.
