Programming and prejudice: Computer scientists discover how to find bias in algorithms

Programming and prejudice
Suresh Venkatasubramanian, an associate professor in the University of Utah's School of Computing, leads a team of researchers that has discovered a technique to determine whether algorithms used for tasks such as hiring or administering housing loans could in fact discriminate unintentionally. The team has also discovered a way to fix such errors if they exist. The findings were recently presented at the 21st Association for Computing Machinery SIGKDD Conference on Knowledge Discovery and Data Mining in Sydney, Australia. Credit: University of Utah College of Engineering

Software may appear to operate without bias because it strictly uses computer code to reach conclusions. That's why many companies use algorithms to help weed out job applicants when hiring for a new position.

But a team of computer scientists from the University of Utah, the University of Arizona and Haverford College in Pennsylvania has discovered a way to find out if an algorithm used for hiring decisions, loan approvals and comparably weighty tasks could be biased like a human being.

The researchers, led by Suresh Venkatasubramanian, an associate professor in the University of Utah's School of Computing, have discovered a technique to determine whether such software programs discriminate unintentionally and violate the legal standards for fair access to employment, housing and other opportunities. The team has also determined a method to fix these potentially troubled algorithms.

Venkatasubramanian presented the findings Aug. 12 at the 21st Association for Computing Machinery SIGKDD Conference on Knowledge Discovery and Data Mining in Sydney, Australia.

"There's a growing industry around doing resume filtering and resume scanning to look for job applicants, so there is definitely interest in this," says Venkatasubramanian. "If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair."

Machine-learning algorithms

Many companies have been using algorithms to help filter out job applicants in the hiring process, typically because it can be overwhelming to sort through the applications manually when many people apply for the same job. A program can do that instead by scanning resumes, searching for keywords or numbers (such as school grades) and then assigning an overall score to each applicant.
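As a toy illustration of that kind of scoring (the keywords, weights and sample resume below are invented for this example, not taken from any real screening product):

```python
# Hypothetical keyword weights -- real screening software uses far
# richer features, but the basic scoring idea is the same.
KEYWORD_WEIGHTS = {"python": 3, "sql": 2, "leadership": 1}

def score_resume(text: str) -> int:
    """Sum the weights of the keywords found in a resume's text."""
    words = set(text.lower().split())
    return sum(weight for kw, weight in KEYWORD_WEIGHTS.items() if kw in words)

print(score_resume("Strong Python and SQL background"))  # 5
```

Applicants above some score threshold advance; everyone else is filtered out before a human ever reads the resume.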

These programs also can learn as they analyze more data. Known as machine-learning algorithms, they can change and adapt like humans so they can better predict outcomes. Amazon uses similar algorithms to learn the buying habits of customers and target ads more accurately, and Netflix uses them to learn the movie tastes of users when recommending new viewing choices.

But there has been a growing debate on whether machine-learning algorithms can introduce unintentional bias much like humans do.

"The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations," Venkatasubramanian says.

Disparate impact

Venkatasubramanian's research determines whether these algorithms can be biased through the legal definition of disparate impact, a theory in U.S. anti-discrimination law under which a policy may be considered discriminatory if it has an adverse impact on any group based on race, religion, gender, sexual orientation or other protected status.
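U.S. enforcement agencies often operationalize disparate impact with the "four-fifths rule": if a protected group's selection rate is below 80% of another group's rate, that is treated as evidence of adverse impact. A minimal sketch, using hypothetical hiring numbers:

```python
def disparate_impact_ratio(selected_protected, total_protected,
                           selected_other, total_other):
    """Ratio of the protected group's selection rate to the other group's."""
    rate_protected = selected_protected / total_protected
    rate_other = selected_other / total_other
    return rate_protected / rate_other

# Hypothetical pool: 30 of 100 protected-group applicants hired
# versus 60 of 100 others. A ratio of 0.5 falls below the 0.8 bar.
ratio = disparate_impact_ratio(30, 100, 60, 100)
print(ratio, ratio < 0.8)  # 0.5 True -> potential disparate impact
```

The ratio alone does not prove discrimination; it flags outcomes that merit scrutiny, which is the same role the researchers' test plays for algorithms.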

Venkatasubramanian's research revealed that a test can determine whether the algorithm in question is possibly biased. If the test—which, ironically, uses another machine-learning algorithm—can accurately predict a person's race or gender from the data being analyzed, even though race and gender are hidden from that data, then there is a potential for bias under the definition of disparate impact.

"I'm not saying it's doing it, but I'm saying there is at least a potential for there to be a problem," Venkatasubramanian says.
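The audit idea above can be sketched on synthetic data, where an invented visible feature (a zip code) is deliberately correlated with the hidden group and therefore acts as a proxy for it:

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Synthetic applicants: 'group' is hidden from the hiring model, but the
# visible zip code is correlated with it 90% of the time (a proxy variable).
records = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    correlated = random.random() < 0.9
    if group == "A":
        zip_code = "90210" if correlated else "10001"
    else:
        zip_code = "10001" if correlated else "90210"
    records.append((zip_code, group))

# The audit: try to predict the hidden attribute from the visible data.
# A majority vote per zip code already serves as a crude "classifier".
votes = defaultdict(Counter)
for zip_code, group in records:
    votes[zip_code][group] += 1
predict = {z: counts.most_common(1)[0][0] for z, counts in votes.items()}

accuracy = sum(predict[z] == g for z, g in records) / len(records)
print(f"group predicted from zip code with {accuracy:.0%} accuracy")
# Accuracy far above the 50% chance level means the visible data leaks
# the protected attribute -- the warning sign described above.
```

If the predictor cannot beat chance, group membership is effectively invisible to the hiring algorithm; if it can, any model trained on that data has access to a proxy for the protected attribute.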

If the test reveals a possible problem, Venkatasubramanian says, it is easy to fix: redistribute the data being analyzed—say, the information of the job applicants—so that the algorithm cannot see the information that could be used to create the bias.
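A toy sketch of that redistribution idea, simplified from the rank-preserving "repair" in the paper: replace each raw score with its percentile within the applicant's own group, so the repaired score distributions become identical across groups while each group's internal ranking is preserved. (The group labels and scores here are invented; a real repair must also handle tied scores and allow partial repair to trade fairness against accuracy.)

```python
def repair(scores_by_group):
    """Map each score to its within-group percentile, rank / (n - 1)."""
    repaired = {}
    for group, scores in scores_by_group.items():
        ranked = sorted(scores)
        n = len(scores)
        repaired[group] = [ranked.index(s) / (n - 1) for s in scores]
    return repaired

# Hypothetical raw test scores for two groups with shifted distributions.
raw = {"A": [50, 60, 70, 80], "B": [70, 80, 90, 100]}
fixed = repair(raw)
print(fixed["A"] == fixed["B"])  # True: group can't be read off the score
```

After the repair, the proxy test described above can no longer predict group membership from the score, while the best applicant in each group still ranks first.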

"It would be ambitious and wonderful if what we did directly fed into better ways of doing hiring practices. But right now it's a proof of concept," Venkatasubramanian says.


Provided by University of Utah
Citation: Programming and prejudice: Computer scientists discover how to find bias in algorithms (2015, August 14) retrieved 24 August 2019 from


User comments

Aug 14, 2015
This supposedly improved algorithm would reliably discriminate against better people and in favor of worse people. The legal standard "disparate impact" presumes that different groups are equal in ability and that if fewer non-Asian minorities are hired than their share of the applicant pool, that is evidence of discrimination. Yet it is impossible to create a test of mental abilities that will predict job performance on which Blacks will get average scores equal to Whites or Asians. In some cases the difference can be extreme, as when comparing the GRE math scores of Asian males with Black females: the average of the former is at the 98th percentile of the latter (or the 2nd percentile the other way around). No intervention has been found to have lasting effects on intelligence, and a wide variety of interlocking evidence shows that differences in adult intelligence are 50-80% genetic, and 0% due to differences in home or educational environment. Favoring the worse over the better is bad.

Aug 14, 2015
Algorithms (binary opposition/structuralism) are very ancient: we can see them at least in the Sanskrit grammar of Panini (5th century BCE) and Aristotelian logic (3rd century BCE), and they were later popularized by the Persian mathematician Khwarizmi (10th century). However, Buddha (6th century BCE) perceived their limitation, because these are relative terms. For example, "the Sun rises in the east" is a universal truth; in the Buddhist perspective, however, there is no such thing as East in an absolute sense. Thus, algorithms are prone to suffer from bias.

Aug 15, 2015
"Disparate impact" is not a bias, because you are indeed statistically more likely to get better results in hiring if you don't try to eliminate it. The fact that unbiased computer algorithms do not care about disparate impact only further proves that it is not a bias. You may or may not consider it unfair, but calling it a bias is erroneous. It is humans, such as the author of this article, who are truly biased, not the algorithm.

Aug 15, 2015
EWH, take your racist nonsense somewhere else.

Aug 21, 2015
If there is a difference in a trait between protected groups, then an unbiased algorithm (performed by machines or humans) will appear biased and discriminatory. How we are going to deal with that remains unclear. What is a fair solution?
