Google worker rebellion against military project grows

May 16, 2018
About a dozen Google workers are said to be quitting the company over its collaboration with the US military on drones

An internal petition calling for Google to stay out of "the business of war" was gaining support Tuesday, with some workers reportedly quitting to protest a collaboration with the US military.

About 4,000 Google employees were said to have signed a petition that began circulating about three months ago, urging the internet giant to refrain from using artificial intelligence to make US military drones better at recognizing what they are monitoring.

Tech news website Gizmodo reported this week that about a dozen Google employees are quitting to take an ethical stand.

The California-based company did not immediately respond to inquiries about what was referred to as Project Maven, which reportedly uses machine learning and engineering talent to distinguish people and objects in videos for the Defense Department.

"We believe that Google should not be in the business of war," the petition reads, according to copies posted online.

"Therefore, we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."

'Step away' from killer drones

The Electronic Frontier Foundation, an internet rights group, and the International Committee for Robot Arms Control (ICRAC) were among those who have weighed in with support.

While reports indicated that artificial intelligence findings would be reviewed by human analysts, the technology could pave the way for automated targeting systems on armed drones, ICRAC reasoned in an open letter backing the Google employees who oppose the project.

"As military commanders come to see the recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems," ICRAC said in the letter.

"We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control."

Google has gone on the record saying that its work to improve machines' ability to recognize objects is not for offensive uses, but published documents show a "murkier" picture, the EFF's Cindy Cohn and Peter Eckersley said in an online post last month.

"If our reading of the public record is correct, systems that Google is supporting or building would flag people or objects seen by drones for human review, and in some cases this would lead to subsequent missile strikes on those people or objects," said Cohn and Eckersley.

"Those are hefty ethical stakes, even with humans in the loop further along the 'kill chain.'"

The EFF and others welcomed internal Google debate, stressing the need for moral and ethical frameworks regarding the use of artificial intelligence in weaponry.

"The use of AI in weapons systems is a crucially important topic and one that deserves an international public discussion and likely some international agreements to ensure global safety," Cohn and Eckersley said.

"Companies like Google, as well as their counterparts around the world, must consider the consequences and demand real accountability and standards of behavior from the military agencies that seek their expertise—and from themselves."

7 comments

SwamiOnTheMountain
1 / 5 (2) May 16, 2018
People watch too many movies.
rrwillsj
5 / 5 (2) May 16, 2018
Speaking for myself, I am gratified that a number of people, programmers and technicians and those in other disciplines, are publicly recognizing and opposing the militarization of AI, drones and automated systems.

These efforts to weaponize robots are political cowardice, putting forward machines to take the blame for human psychosis.

If you ain't got the guts to admit responsibility for evil decisions, then don't come whining to me when vengeance comes hunting your sorry ass!
mosahlah
not rated yet May 16, 2018
My hope is that a California based company does not lead the US in military technology. But, if we're going to hope to counter the 25% of R&D China is investing in AI, we better hurry.
Da Schneib
not rated yet May 16, 2018
Interesting way to try to end-around the ethical point that a human needs to approve an order to kill. Reduce it to a yes/no decision then take that decision out of the hands of humans. If legitimate and ethical military organizations don't, illegitimate and unethical ones will.
aksdad
1 / 5 (1) May 18, 2018
So they are opposed to the use of artificial intelligence to improve targeting, which still requires human oversight. They'd rather we make more mistakes than fewer. Brilliant thinking. A Luddite approach to warfare. Let's not use technology to limit loss of life and collateral damage; let's instead use older, more barbaric and less discriminating methods to take out enemies. Because carpet-bombing is so much more humane.
aksdad
1 / 5 (1) May 18, 2018
The U.S. military has stringent protocols for using armed drones. Final decisions to kill a target are made by people—not computers—after reviewing intelligence from the ground as well as surveillance footage from drones.

The AI in question is being developed to improve those decisions and reduce mistakes. Reducing inadvertent casualties is a goal that no reasonable person would object to. Likewise, reasonable and moral people object to war, but objecting to it doesn't stop our enemies from trying to kill us. Anyone remember 9/11? Al Qaeda?

Continuing to develop and improve technology to limit our enemies' ability and desire to kill us reduces loss of life and refining offensive weapons to reduce accidental killing also reduces loss of life, both of which are moral and worthwhile goals.
antialias_physorg
5 / 5 (1) May 18, 2018
My hope is that a California based company does not lead the US in military technology. But, if we're going to hope to counter the 25% of R&D China is investing in AI, we better hurry.

...or people could put a bit more effort into not having wars.
