New algorithm limits bias in machine learning

July 16, 2018 by Jenna Marshall, Santa Fe Institute

Machine learning—a form of artificial intelligence based on the idea that computers can learn from data and make determinations with little help from humans—has the potential to improve our lives in countless ways. From self-driving cars to mammogram scans that can read themselves, machine learning is transforming modern life.

It's easy to assume that using algorithms for decision-making removes humans from the equation. But researchers have found that machine learning can produce unfair determinations in certain contexts, such as hiring someone for a job. For example, if the data plugged into the algorithm suggest men are more productive than women, the machine is likely to "learn" that difference and favor male candidates over female ones, missing the bias of the input. And managers may fail to detect the machine's discrimination, thinking that an automated decision is an inherently neutral one, resulting in unfair hiring practices.
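
How that happens can be sketched in a few lines of code. The following is a hypothetical Python illustration on invented synthetic data (the feature names and numbers are assumptions, not taken from any real study): a classifier trained on historically biased hiring labels ends up rating equally skilled candidates differently by gender.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    gender = rng.integers(0, 2, n)   # 0 = female, 1 = male (protected attribute)
    skill = rng.normal(0, 1, n)      # the signal that should drive the decision

    # Biased historical labels: past managers favored men, so "hired"
    # depends on gender as well as skill.
    hired = (skill + 0.8 * gender + rng.normal(0, 0.5, n)) > 0.5

    model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

    # The model reproduces the bias: two candidates with identical skill
    # receive different predicted hiring probabilities.
    for g in (0, 1):
        p = model.predict_proba([[g, 0.0]])[0, 1]
        print(f"gender={g}, skill=0.0: P(hired) = {p:.2f}")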

In a new paper published in the Proceedings of the 35th International Conference on Machine Learning, SFI Postdoctoral Fellow Hajime Shimao and Junpei Komiyama, a research associate at the University of Tokyo, offer a way to ensure fairness in machine learning. They've devised an algorithm that imposes a fairness constraint to prevent bias.

"So say the credit card approval rate of black and white [customers] cannot differ more than 20 percent. With this kind of constraint, our algorithm can take that and give the best prediction of satisfying the constraint," Shimao says. "If you want the difference of 20 percent, tell that to our machine, and our machine can satisfy that constraint."

That ability to precisely calibrate the constraint allows companies to ensure they comply with federal non-discrimination laws, adds Komiyama. The team's algorithm "enables us to strictly control the level of fairness required in these legal contexts," Komiyama says.

Correcting for bias involves a trade-off, though, Shimao and Komiyama note in the study. Because the constraint can affect how the machine reads other aspects of the data, it can sacrifice some of the machine's predictive power.
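
The sketch below illustrates that trade-off in the same spirit (again on invented synthetic data, not the paper's experiments or method): when the outcome being predicted genuinely correlates with group membership in the data, tightening the allowed approval-rate gap pushes the per-group thresholds away from the accuracy-optimal cutoff, and accuracy falls.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 20000
    group = rng.integers(0, 2, n)
    # In this toy data the true outcome correlates with group, so an
    # unconstrained predictor is more accurate but more disparate.
    truth = rng.random(n) < (0.3 + 0.4 * group)
    scores = truth * 0.3 + rng.random(n) * 0.7   # noisy scores in [0, 1]

    for max_gap in (0.30, 0.20, 0.10, 0.05):
        t = {0: 0.5, 1: 0.5}
        for _ in range(400):
            approve = np.where(group == 0, scores >= t[0], scores >= t[1])
            gap = approve[group == 0].mean() - approve[group == 1].mean()
            if abs(gap) <= max_gap:
                break
            hi, lo = (0, 1) if gap > 0 else (1, 0)
            t[hi] += 0.002
            t[lo] -= 0.002
        acc = (approve == truth).mean()
        print(f"max_gap={max_gap:.2f}: accuracy={acc:.3f}, gap={gap:+.3f}")

Loose caps leave the accuracy-optimal thresholds untouched; tight caps force the thresholds apart and shave a few points of accuracy, which is the cost the authors describe.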

Shimao says he would like to see businesses use the algorithm to help root out the hidden discrimination that may be lurking in their machine learning programs. "Our hope is that it's something that can be used so that machines can be prevented from discrimination whenever necessary."
