How much all-seeing AI surveillance is too much?

July 3, 2018 by Matt O'Brien
In this April 23, 2018, photo, Ashley McManus, global marketing director of the Boston-based artificial intelligence firm, Affectiva, demonstrates facial recognition technology that is geared to help detect driver distraction, at their offices in Boston. Recent advances in AI-powered computer vision have spawned startups like Affectiva, accelerated the race for self-driving cars and powered the increasingly sophisticated photo-tagging features found on Facebook and Google. (AP Photo/Elise Amendola)

When a CIA-backed venture capital fund took an interest in Rana el Kaliouby's face-scanning technology for detecting emotions, the computer scientist and her colleagues did some soul-searching—and then turned down the money.

"We're not interested in applications where you're spying on people," said el Kaliouby, the CEO and co-founder of the Boston startup Affectiva. The company has trained its artificial intelligence systems to recognize if individuals are happy or sad, tired or angry, using a photographic repository of more than 6 million faces.

Recent advances in AI-powered computer vision have accelerated the race for self-driving cars and powered the increasingly sophisticated photo-tagging features found on Facebook and Google. But as these prying AI "eyes" find new applications in store checkout lines, police body cameras and war zones, the tech companies developing them are struggling to balance business opportunities with difficult moral decisions that could turn off customers or their own workers.

El Kaliouby said it's not hard to imagine using real-time face recognition to pick up on dishonesty—or, in the hands of an authoritarian regime, to monitor reaction to political speech in order to root out dissent. But the small firm, which spun off from an MIT research lab, has set limits on what it will do.

The company has shunned "any security, airport, even lie detection stuff," el Kaliouby said. Instead, Affectiva has partnered with automakers trying to help tired-looking drivers stay awake, and with consumer brands that want to know if people respond to a product with joy or disgust.

Such queasiness reflects new qualms about the capabilities and possible abuses of all-seeing, always watching AI camera systems—even as authorities are growing more eager to use them.

In the immediate aftermath of Thursday's deadly shooting at a newspaper in Annapolis, Maryland, police said they turned to face recognition to identify the uncooperative suspect. They did so by tapping a state database that includes mug shots of past arrestees and, more controversially, everyone who registered for a Maryland driver's license.

Initial reports from law enforcement said police had turned to facial recognition because the suspect had damaged his fingerprints in an apparent attempt to avoid identification. That turned out to be incorrect; police said they used facial recognition because of delays in getting fingerprint identification.

In June, Orlando International Airport announced plans to require face-identification scans of passengers on all arriving and departing international flights by the end of this year. Several other U.S. airports have already been using such scans for some, but not all, departing international flights.

Chinese firms and municipalities are already using intelligent cameras to shame jaywalkers in real time and to surveil ethnic minorities, subjecting some to detention and political indoctrination. Closer to home, the overhead cameras and sensors in Amazon's new cashier-less store in Seattle aim to make shoplifting obsolete by tracking every item shoppers pick up and put back down.

Concerns over the technology can shake even the largest tech firms. Google, for instance, recently said it will exit a defense contract after employees protested the military application of the company's AI technology. The work involved computer analysis of drone video footage from Iraq and other conflict zones.

Similar concerns about government contracts have stirred up internal discord at Amazon and Microsoft. Google has since published AI guidelines emphasizing uses that are "socially beneficial" and that avoid "unfair bias."

Amazon, however, has so far deflected growing pressure from employees and privacy advocates to halt Rekognition, a powerful face-recognition tool it sells to police departments and other government agencies.

Saying no to some work, of course, usually means someone else will do it. The drone-footage project involving Google, dubbed Project Maven, aimed to speed the job of looking for "patterns of life, things that are suspicious, indications of potential attacks," said Robert Work, a former top Pentagon official who launched the project in 2017.

While it hurts to lose Google because they are "very, very good at it," Work said, other companies will continue those efforts.

In this April 23, 2018, photo, Rana el Kaliouby, CEO of the Boston-based artificial intelligence firm, Affectiva, poses in Boston. Affectiva builds face-scanning technology for detecting emotions, but its founders decline business opportunities that involve spying on people. (AP Photo/Elise Amendola)

Commercial and government interest in computer vision has exploded since breakthroughs earlier in this decade using a brain-like "neural network" to recognize objects in images. Training computers to identify cats in YouTube videos was an early challenge in 2012. Now, Google has a smartphone app that can tell you which breed.

A major research meeting—the annual Conference on Computer Vision and Pattern Recognition, held in Salt Lake City in June—has transformed from a sleepy academic gathering of "nerdy people" to a gold rush business expo attracting big companies and government agencies, said Michael Brown, a computer scientist at Toronto's York University and a conference organizer.

Brown said researchers have been offered high-paying jobs on the spot. But few of the thousands of technical papers submitted to the meeting address broader public concerns about privacy, bias or other ethical dilemmas. "We're probably not having as much discussion as we should," he said.

Startups are forging their own paths. Brian Brackeen, the CEO of Miami-based facial recognition software company Kairos, has set a blanket policy against selling the technology to law enforcement or for government surveillance, arguing in a recent essay that it "opens the door for gross misconduct by the morally corrupt."

Boston-based startup Neurala, by contrast, is building software for Motorola that will help police-worn body cameras find a person in a crowd based on what they're wearing and what they look like. CEO Max Versace said that "AI is a mirror of the society," so the company only chooses principled partners.

"We are not part of that totalitarian, Orwellian scheme," he said.

14 comments

rderkis
1 / 5 (3) Jul 03, 2018
It's good to see individual companies making decisions on what's best for our military. I just wonder, the next time there is a terrorist attack that kills thousands of Americans, whether they will accept at least partial responsibility for not helping to prevent it.
Our military, CIA, FBI, etc. are controlled by us as a country and are the good guys. Give them all the tools they need to apprehend the bad guys and keep our loved ones safe.
SCVGoodToGo
1 / 5 (1) Jul 03, 2018
Any is too much.
rrwillsj
3.7 / 5 (3) Jul 03, 2018
rderkis, you might want to consider what happens to you and your family when government agents decide that you should be targeted.
https://en.wikipe...came_...

You might want to consider the danger of you and your family becoming collateral damage. Because you were friendly with a politically suspect neighbor. Because you inadvertently attended a public assembly and were afterwards identified as one of the suspected agitators.
Because when you were in college, one of your instructors is now listed as a subversive, and every student she had is suspect.

This recent article tells of a guy fired because of a manager's error. The company's computer systems, following automated procedures, terminated his employment, and the company's human management was 'helpless' to prevent, retract, or overturn the decision.
https://phys.org/...-ai.html

A mighty slippery slope you are headed down, blindfolded!
chemhaznet1
1 / 5 (4) Jul 03, 2018
rrwillsj, you might want to consider what happens to you and your family when government agents decide that you should be targeted because of certain posts you leave online implying that citizens and companies of the United States of America shouldn't cooperate with their own government agencies, all because of your own paranoia.
rderkis
1 / 5 (3) Jul 03, 2018
rrwillsj, The government has saved my life and the lives of my loved ones, and YOUR loved ones, more times than I can count.
As far as me and my loved ones being targeted, while that is possible it has NOT happened, but their saving of me and my loved ones HAS happened.
A bird in the hand is worth two in the bush.
Be vigilant and watchful for the misuse of government, but don't cripple them out of fear of our inability to monitor them. And I would rather MY government make decisions about what is best for me than some greedy corporation.
TheGhostofOtto1923
1 / 5 (1) Jul 04, 2018
"Such queasiness reflects new qualms about the capabilities and possible abuses of all-seeing, always watching AI camera systems—even as authorities are growing more eager to use them."

-Yeah. Because human law enforcement is still out there trying to do this themselves, and doing it badly. Scofflaws would rather preserve their chances at avoiding detection, because it is human nature and their god-given right. Literally, as in the case of jihadis.

And so we are supposed to believe that traffic cops chasing down speeders is somehow morally superior to traffic cams sending them tickets in the mail, even though it's far more dangerous and costly.
TheGhostofOtto1923
1 / 5 (2) Jul 04, 2018
rderkis, you might want to consider what happens to you and your family when government agents decide that you should be targeted
Yes, and who fears detection more than the psychopath whose raison d'être is victimizing and getting away with it?
vernAtWork
not rated yet Jul 04, 2018
The article suggests that there is a choice. Point to ANY evidence whatsoever that AI growth and development is under societal control, and I would be very impressed indeed. Clearly its growth and development is rapidly evolutionary: downhill with no brakes applied.
FredJose
1 / 5 (2) Jul 05, 2018
Most probably, AI will be used for the ultimate human surveillance:

Revelation 13:
16: And the second beast required all people small and great, rich and poor, free and slave, to receive a mark on their right hand or on their forehead,
17: so that no one could buy or sell unless he had the mark — the name of the beast or the number of its name.

This is where it is really heading but there's no reason to get all worked up about it since the ultimate solution has already been set in motion:
Joel 2:
31 The sun will be turned to darkness and the moon to blood before the coming of the great and awesome day of the LORD.
32 And everyone who calls on the name of the LORD will be saved, for on Mount Zion and in Jerusalem there will be deliverance, as the LORD promised, among the remnant called by the LORD.
FredJose
1 / 5 (3) Jul 05, 2018
Strange how people can immediately see that human intelligence far surpasses that of even the most sophisticated AI system, YET:
They also want to claim that the human brain developed by accident, via random trial-and-error mutations and natural selection.
Where does all the intelligence (an abstract entity) come from? How do purely random material processes give rise to abstract information and logic?
If you believe in Darwinian evolution you need to have a humongous amount of faith. That right there is exactly the religion you despise so much!
SCVGoodToGo
1 / 5 (1) Jul 05, 2018
What created your creator, Fred? by your logic something even higher must have created your creator because it couldn't have just popped into existence via random trial and error mutations and natural selection.
KBK
not rated yet Jul 05, 2018
If it is a piece of software, rest assured it is already stolen and in use. Being upgraded, being modified, being patched up, and so on...as we speak and write, it is being done.

No doubt about it at all. The odds of being wrong on this are very low...

~~~~~~~~~~

Facial recognition is very competent these days, and AI packages are very capable.

It may even be that outfits like the CIA want this software as a cover for far more sophisticated software that they already have.

Then they could futz around with this stuff in the public arena, regarding laws, usage, appearances of compliance and so on.... while the real deal, software they've been developing for over a decade.......steams along full speed ahead - in the backdrop.

It would not surprise me in the slightest, and would be the kind of move they'd pull.

Their job is to be devious, cunning and intelligent, all out of your capacity for sight and thinking.... and they are that indeed.
rderkis
1 / 5 (1) Jul 05, 2018
Quote: "Their job is to be devious, cunning and intelligent, all out of your capacity for sight and thinking.... and they are that indeed."

GOOD, that is what we pay them the big bucks for.

TheGhostofOtto1923
not rated yet Jul 07, 2018
32 And everyone who calls on the name of the LORD will be saved, for on Mount Zion and in Jerusalem there will be deliverance, as the LORD promised, among the remnant called by the LORD.
Yeah, well, since he promised to return within a generation and that was 2000 years ago, you people get impatient and blame all the heathens for his reluctance to return. And you begin ridding the earth of them in hopes that Jesus will appreciate the gesture.

Know why the pope declared that hell doesn't exist? He got tired of trying to explain how the god of infinite mercy, compassion and forgiveness could nevertheless condemn people to an eternity of horrific suffering and torture merely because they couldn't believe in him.

Jews were an especially hard sell.

I mean, how could you get ecumenical with people who believed that shit?
