Artificial intelligence has a racial bias problem. Google is funding summer camps to try to change that

August 11, 2018, by Ashley Wong, USA Today

Through connections made at summer camp, high school students Aarzu Gupta and Lili Sun used artificial intelligence to create a drone program that aims to detect wildfires before they spread too far.

Rebekah Agwunobi, a rising high school senior, learned enough to nab an internship at the Massachusetts Institute of Technology's Media Lab, where she is using artificial intelligence to evaluate the court system, including collecting data on how judges set bail.

Both projects stemmed from the Oakland, Calif.-based nonprofit AI4All, which will expand its outreach to young women and members of under-represented minorities with a $1 million grant from Google.org, the technology giant's philanthropic arm announced Friday.

Artificial intelligence is becoming increasingly commonplace in daily life, found in everything from Facebook's face detection feature for photos to Apple's iPhone X facial recognition.

It's also one of technology's more disputed frontiers. The late astrophysicist Stephen Hawking and Tesla CEO Elon Musk warned that the unfettered development of artificial intelligence puts human civilization at risk and could lead to autonomous weapons of terror. Such fears led Google staff earlier this year to press the company to halt a drone contract with the Pentagon.

The technology, still in its early stages, has also been decried for built-in racial bias that can amplify existing stereotypes. That's particularly worrisome as more companies use it for decisions like hiring and as police leverage AI-powered software to identify suspects. MIT Media Lab researcher Joy Buolamwini, who is black, found that facial recognition software could identify her face more easily when she wore a white mask, a result of algorithms trained on data sets of mostly white faces.

Three years ago, Google apologized after its photo identification software mislabeled black people as gorillas. Microsoft did the same after users quickly found a way to get an artificial intelligence-powered social chatbot to spew racial slurs.

Tess Posner, CEO of AI4All, said the problem is compounded by the fact that women and people of color have historically been left out of the tech industry, particularly in AI.

"We need to have people included that are going to be impacted by these technologies, and we also need inclusion to ensure that they're developed responsibly," Posner said. "(Bias) happens when we don't have people asking the right questions from the beginning."

Despite stated efforts to attract more women and people of color, Google, Facebook and other tech giants have been slow to diversify their staffs, and they've hired few women of color. African-American and Hispanic women make up no more than one percent of Silicon Valley's entire workforce.

Posner's organization believes the tech industry has to start including women and people of color at a much earlier stage. It's working to close that gap through summer camps aimed at high school students.

AI4All, launched in 2017, is based on a two-week summer camp program out of Stanford University.

Since then, AI4All has expanded across the country. In its first year, it ran just two summer camps, at Stanford University and UC Berkeley. This year it added four more, at Carnegie Mellon, Princeton, Boston University and Simon Fraser University.

All of the camps are aimed at students who are women, people of color or low-income.

Part of Google.org's grant will go toward opening more AI4All camps. The ultimate goal is to use the money to create a free online AI curriculum accessible to anyone in the world; a course is already in the works.

"We really need for AI to be made by diverse creators, and that starts with people having access to the learning opportunities to understand at its core what AI is and how it can be applied," Google.org's AI4All partnership lead Hannah Peter said.

In addition to the summer camps, AI4All offers three-month fellowships in which students develop their own projects and pitch them to AI experts in the industry, as well as funding for students to launch independent initiatives.

One such initiative was AI4All alumnus Ananya Karthik's workshop, creAIte, which uses artificial intelligence to create artwork. Karthik gathered a few dozen girls on a sunny Monday afternoon at Oakland's Kapor Center to show them how to use the Deep Dream Generator program to fuse images together for a unique piece of artwork.

Other AI4All students, most of whom are still in high school, have turned their newly acquired technical skills toward pressing current issues, like the wildfire project developed by Gupta and Sun, from AI4All's 2017 and 2016 classes, respectively. The two met during one of AI4All's three-month fellowships this year. The idea came out of the Napa and Sonoma County fires that plagued Northern California late last year.

Gupta and Sun said the camps validated their interest in STEM careers. They also appreciated the camps' talks featuring real-world examples of minority women who have succeeded in the industry.

"I want to initiate change using artificial intelligence," Sun said. "I don't want to be just working on an iPhone or something like that ... (AI4All) gave me real examples of people who've succeeded, which is pretty cool. I knew that I could do it."

Because of her experiences, Gupta said, she's looking forward to exploring a career in AI, particularly its uses in health and medicine. She's already putting that interest to work with her internship this summer at UC San Francisco, where the lab she works in is researching why women face an increased risk of developing Alzheimer's disease.

Amy Jin, a 2015 AI4All alumna who will start at Harvard University as a freshman in the fall, said the program opened her eyes to the possibilities of AI as a tool for solving real-world problems.

Using surgery videos from UCSF, Jin, along with one of her AI4All mentors, developed a program that can track a surgeon's tools, movements and hand placement to give feedback on how to improve their technique.

For Agwunobi, AI4All was instrumental in showing her how she could combine her passion for activism and social justice with her interest in technology.

At her MIT internship, Agwunobi took data gathered during the pre-trial process to evaluate how key figures like judges behave while setting bail. The goal is to arm activists with this data when pushing for bail reform and scaling back mass incarceration.

"You can work with tech and still be accountable to community solutions," Agwunobi said. "(AI4All) affirmed my desire to solve interesting problems that actually helped communities I was accountable to, rather than making me feel like I was selling out ... I think that's how I want to approach solving humanitarian problems in the future."


8 comments


TheGhostofOtto1923
Aug 11, 2018
Well the problem is obviously capitalism itself which is based on competition. As long as participants are selected for performance then there will always be inequality.

Affirmative action is fine in education where competition is irrelevant. But the only way to make it work in the marketplace, is to destroy the marketplace and replace it with socialism.

Which is probably the purpose for it in the first place.

Soviet communism died because it could not compete with a system based on competition. But communism itself is certainly not dead. Communists know that the only way for them to survive is to destroy capitalism, by any and all means.

And a tool for doing this is to make everyone equal.
TheGhostofOtto1923
Aug 11, 2018
Odd that they use as examples, 2 minorities that are anything but 'underrepresented'.

"About twice as many Asian as white, black, or Hispanic students enter STEM fields. Completion rates are lowest for black and Hispanic students, with only 16% of those in each of these groups who enter STEM fields earning bachelor's degrees in these fields, compared to about 30% of the Asian and white students who enter these fields."

-Not odd really, as the article is politics not science.
Anda
Aug 11, 2018
Artificial intelligence and computers in general don't have racial bias problems.
Programmers do. Yes? People do the programming... right?

But sick minds'll read this article and talk about capitalism and communism... living in the 80's.

I only see another sensationalist title as usual lately, sadly.

mqr
Aug 11, 2018
Racism = prejudice + negative emotions

I have prejudice regarding some ethnic groups, but no hatred towards anyone. Having prejudice is just generalization based on experiences.

Racism when elevated to social policies has been so effective that race predicts things, for example: the consequences of the relationships with the police (i.e., driving as black), and the relationships with the educational system in the USA and the UK. The situation of blacks or Hispanics in the USA does not say anything bad about those ethnic groups, but it talks very loud about the way in which that society is organized. Who has organized in that way? you should answer who is responsible for that....

Segregation hatred is a central component of the USA, that is why they had selected the leaders that they currently have.
TheGhostofOtto1923
Aug 11, 2018
Racism is an aspect of tribalism. And we are all tribalists. We might be persuaded that we all belong to a universal tribe, but as soon as we perceive others acting tribally, we naturally, subconsciously, instinctively, respond tribally.

""Primeval man", he argued, "regarded actions as good or bad solely as they obviously affected the welfare of the tribe, not of the species". Among the living tribal peoples, he added, "the virtues are practised almost exclusively in relation to the men of the same tribe" and the corresponding vices "are not regarded as crimes" if practised on other tribes (Darwin, 1871)"
ET3D
Aug 12, 2018
Agreed with Anda about the sensationalism. While getting a wider variety of people to program AI is laudable, this isn't really a solution to AI racism. An AI to fight fires can't be racist. Just having diverse people do AI might at some point mean they'd be on a project where AI racism can happen and might think about this, but that's a really roundabout way to do things.

It would be much better if every major AI project which involves identifying people were tested against a large, inclusive database by third parties, or were forced by law to disclose demographic details of its database. That would be much more direct and much more effective.
rrwillsj
Aug 12, 2018
Well otto, you are certainly not equal to me! And actually you are not even up to qualify as vermin.

So therefore, by your own claimed standards, I have the "Right". Nay, "Duty" to deprive you of your property rights. Your civil rights. Your bogus claim to be human provides me the justification to deprive you your human rights.

Cause I wouldn't want to have to accuse you of hypocrisy. Listening to your supplication for special treatment by treating you with any more consideration than you treat others.
mrburns
Aug 12, 2018
affirmative action programming will retard progress and impoverish society. moreover the chosen oppressed identity group programmers will have to be validated and funded no matter what crap they produce. A passion for social justice and activism is just shorthand for resentment and envy, which is all the left has ever had to offer. By fueling such negative and destructive character flaws Google ventures ever deeper into the realm of evil. They know perfectly well that it was merit, competition and capitalism that built Google but now they deny all of that for Marxist crap.
