The new digital divide is between people who opt out of algorithms and people who don't

Do you know what happens when you share your data? Credit: mtkang/shutterstock.com

Every aspect of life can be guided by artificial intelligence algorithms – from choosing what route to take for your morning commute, to deciding whom to take on a date, to complex legal and judicial matters such as predictive policing.

Big tech companies like Google and Facebook use AI to obtain insights from their gargantuan troves of detailed customer data. This allows them to monetize users' collective preferences through practices such as micro-targeting, a strategy advertisers use to narrowly target specific sets of users.

In parallel, many people now trust platforms and algorithms more than their own governments and civil society. An October 2018 study suggested that people demonstrate "algorithm appreciation," to the extent that they rely more heavily on advice when they think it comes from an algorithm than from a human.

In the past, technology experts have worried about a "digital divide" between those who could access computers and the internet and those who could not. Households with less access to digital technology are at a disadvantage in their ability to earn money and accumulate skills.

But, as algorithms proliferate, the divide is no longer just about access. How do people deal with the plethora of algorithmic decisions that permeate every aspect of their lives?

The savvier users are navigating away from devices and becoming aware of how algorithms affect their lives. Meanwhile, consumers who have less information are relying even more on algorithms to guide their decisions.

The secret sauce behind artificial intelligence

The main reason for the new digital divide, in my opinion as someone who studies information systems, is that so few people understand how algorithms work. To a majority of users, algorithms are a black box.

AI algorithms take in data, fit them to a model and put out a prediction, ranging from what songs you might enjoy to how many years someone should spend in jail. These models are developed and tweaked based on past data and the success of previous models. Most people – even sometimes the algorithm designers themselves – do not really know what goes on inside the model.
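The loop described above – take in past data, fit a model, put out a prediction – can be sketched in a few lines of code. Everything here (the features, the numbers and the tiny perceptron model) is invented for illustration; no real platform's recommender works on anything this simple.

```python
# A toy sketch of "data in, model, prediction out." All features and data
# are hypothetical -- this is not any company's actual system.

# Past behavior: (plays_last_week, follows_artist) -> did the user like the song?
history = [
    ((12, 1), 1),
    ((0, 0), 0),
    ((6, 1), 1),
    ((1, 0), 0),
]

def train(data, lr=0.1, epochs=100):
    """Fit a tiny perceptron to past behavior."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred              # 0 when right, +/-1 when wrong
            w[0] += lr * err * x1       # nudge the weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(model, x):
    """1 = 'recommend the song', 0 = 'skip it'."""
    w, b = model
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

model = train(history)
print(predict(model, (9, 1)))   # frequent listener who follows the artist -> 1
print(predict(model, (0, 0)))   # no plays, no follow -> 0
```

Even at this scale, the article's point holds: the trained model is just a list of weights, and nothing in those numbers explains *why* a given song was recommended. Multiply the features and data by many orders of magnitude and you get the black box most users face.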

Researchers have long been concerned about algorithmic fairness. For instance, Amazon's AI-based recruiting tool was found to penalize female candidates: the system had learned to favor implicitly gendered words – words that men are more likely to use in everyday speech, such as "executed" and "captured."

Other studies have shown that judicial algorithms are racially biased, recommending longer sentences for poor black defendants than for others.

As part of the recently approved General Data Protection Regulation in the European Union, people have "a right to explanation" of the criteria that algorithms use in their decisions. This legislation treats the process of algorithmic decision-making like a recipe book. The thinking goes that if you understand the recipe, you can understand how the algorithm affects your life.

Meanwhile, some AI researchers have pushed for algorithms that are fair, accountable and transparent, as well as interpretable, meaning that they should arrive at their decisions through processes that humans can understand and trust.

Should you stay connected – or unplug? Credit: pryzmat/shutterstock.com

What effect will transparency have? In one study, students were graded by an algorithm and offered different levels of explanation about how their peers' scores were adjusted to arrive at a final grade. The students who received more transparent explanations actually trusted the algorithm less. This, again, suggests a digital divide: Algorithmic awareness does not lead to more confidence in the system.

But transparency is not a panacea. Even when an algorithm's overall process is sketched out, the details may still be too complex for users to comprehend. Transparency will help only users who are sophisticated enough to grasp the intricacies of algorithms.

For example, in 2014, Ben Bernanke, the former chair of the Federal Reserve, was initially denied a mortgage refinance by an automated system. Most individuals who are applying for such a mortgage refinance would not understand how algorithms might determine their creditworthiness.

Opting out of the new information ecosystem

While algorithms influence so much of people's lives, only a tiny fraction of people are sophisticated enough to fully engage with how algorithms affect them.

There are not many statistics about the number of people who are algorithm-aware. Studies have found evidence of algorithmic anxiety, which can lead to a deep imbalance of power between platforms that deploy algorithms and the users who depend on them.

A study of Facebook usage found that when participants were made aware of Facebook's algorithm for curating news feeds, about 83% of participants modified their behavior to try to take advantage of the algorithm, while around 10% decreased their usage of Facebook.

A November 2018 report from the Pew Research Center found that a broad majority of the public had significant concerns about particular uses of algorithms: 66% thought it would not be fair for algorithms to calculate personal finance scores, while 57% said the same about automated resume screening.

A small fraction of individuals exercise some control over how algorithms use their personal data. For example, the Hu-Manity platform allows users an option to control how much of their data is collected. Online encyclopedia Everipedia offers users the ability to be a stakeholder in the process of curation, which means that users can also control how information is aggregated and presented to them.

However, a vast majority of platforms provide their end users neither such flexibility nor the right to choose how the algorithm uses their preferences in curating their news feed or recommending them content. If such options exist, users may not know about them. About 74% of Facebook's users said in a survey that they were not aware of how the platform characterizes their personal interests.

In my view, the new digital literacy is not using a computer or being on the internet, but understanding and evaluating the consequences of an always-plugged-in lifestyle.

This lifestyle has a meaningful impact on how people interact with others; on their ability to pay attention to new information; and on the complexity of their decision-making processes.

Increasing algorithmic anxiety may also be mirrored by parallel shifts in the economy. A small group of individuals are capturing the gains from automation, while many workers are in a precarious position.

Opting out from algorithmic curation is a luxury – and could one day be a symbol of affluence available to only a select few. The question is then what the measurable harms will be for those on the wrong side of the digital divide.


Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: The new digital divide is between people who opt out of algorithms and people who don't (2019, April 17) retrieved 22 May 2019 from https://phys.org/news/2019-04-digital-people-opt-algorithms-dont.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
User comments

Apr 17, 2019
The students with more transparent explanations actually trusted the algorithm less.


Compare and contrast to how many people think self-driving cars are better than people at driving.

The less people understand what's going on, the more they tend to anthropomorphize computers, especially if someone told them it's "intelligent" or AI. This leads to people assuming the computer or the algorithm is somehow reasonable in the same sense as people are, that it is unlikely to make gross errors or respond in completely inappropriate ways.

Algorithmic systems such as Tesla's Autopilot are in reality operating to a great extent on good fortune. Because auto accidents are comparatively rare, it takes a long time and a great number of people killed to arrive at the statistical evidence that they are in fact dangerous to their users. It then takes even longer for the company and its lawyers to admit it, because they can appeal to "upgrades" in the system.

Apr 17, 2019
Put simply:

If someone told you your car stays on the lane merely by identifying white color around particular pixels in a video stream, you'd be terrified to let go of the wheel because you'd understand that anything and everything can easily go wrong with it.

If instead they told you your car has "adaptive driving assistant based on artificial intelligence and advanced computer vision", you'd assume it has more intelligence and deductive powers to the point that it's actually driving the car for you, rather than just bumping the steering wheel away from the shoulder if it happens to detect it.

Both cases are more or less accurately describing the same system. It's just that the "AI" is actually doing far less than people believe it does, because they have no idea what is involved.
