Opinion: Should algorithms be regulated?

January 3, 2017 by Daniel Saraga

Accidents involving driverless cars, programs that calculate the probability of recidivism among criminals, news filters that influence elections: algorithms are involved everywhere. Should governments step in?

Yes, says Markus Ehrenmann of Swisscom.

The current progress being made in data processing and machine learning is not always to our advantage. Some algorithms are already putting people at a disadvantage today and will have to be regulated.

For example, if a driverless car recognises an obstacle in the road, its control system has to decide whether to put the lives of its passengers at risk or endanger uninvolved passers-by on the pavement. The on-board computer takes decisions that used to be made by people. It's up to the state to clarify who must take responsibility for the consequences of automated decisions (so-called 'algorithmic accountability'). Otherwise, our legal system would be rendered ineffective.
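In practice, 'algorithmic accountability' could mean, for instance, that every automated decision is recorded together with the inputs behind it, so that a court or investigator can reconstruct it afterwards. A minimal sketch in Python, in which the field names and actions are illustrative assumptions rather than anything from a real vehicle platform:

```python
import json
import time

def log_decision(sensor_inputs, chosen_action, alternatives):
    """Record an automated decision so it can be audited later."""
    record = {
        "timestamp": time.time(),
        "inputs": sensor_inputs,       # what the system perceived
        "action": chosen_action,       # what it decided to do
        "alternatives": alternatives,  # options it decided against
    }
    # In practice this would go to tamper-evident storage, not stdout.
    print(json.dumps(record))

# Hypothetical example: an obstacle is detected and the car brakes.
log_decision(
    {"obstacle": "pedestrian", "speed_kmh": 48},
    "emergency_brake",
    ["swerve_left", "maintain_course"],
)
```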

In many states in the USA, programs help to decide the length of prison sentences given to criminals. This enables the state to lower the recidivism rate and prison costs – but only on average. In individual cases, the judgements passed by the decision-making algorithms can be disastrously wrong – such as when skin colour or place of residence are used as input variables.
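To see why place of residence is such a troubling input variable, consider a minimal sketch with entirely hypothetical data: a model that never sees skin colour can still reproduce a skewed policing history, because postcode acts as a proxy for it.

```python
from collections import defaultdict

# Hypothetical training records: (postcode, reoffended). Made-up data.
records = [
    ("district_a", True), ("district_a", True), ("district_a", False),
    ("district_b", False), ("district_b", False), ("district_b", True),
]

# "Model": score each postcode by its historical reoffence rate.
counts = defaultdict(lambda: [0, 0])  # postcode -> [reoffences, total]
for postcode, reoffended in records:
    counts[postcode][0] += int(reoffended)
    counts[postcode][1] += 1

risk = {pc: round(n, 2) / total for pc, (n, total) in counts.items()}
print(risk)  # {'district_a': 0.666..., 'district_b': 0.333...}
```

If district_a is a heavily policed neighbourhood, every one of its residents inherits a high score regardless of individual circumstances, even though race was never an explicit input.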

Searching for the concepts 'professional hairstyle' and 'unprofessional hairstyle' in the US version of Google will bring up images of light-skinned and dark-skinned women respectively (an instance of so-called 'algorithmic bias'). The data pool that algorithms use to make their decisions is not always correct. Even if the algorithms use a large number of texts as a basis for their decisions, cultural factors cannot be eliminated, and stereotypes discriminate. Furthermore, data always refers to the past, and thus only allows for limited assertions about the future.

People have a right to an explanation of the decisions that affect them. And they have a right not to be discriminated against. This is why we have to be in a position to comprehend the decision-making processes of algorithms and, where necessary, to correct them. The same applies to the ranking mechanisms of the big social networks. What's dangerous about them is not their biased selection of media reports, but the fact that their mode of operation remains hidden from us. Public and private organisations are already working on solutions for the 'debiasing' of algorithms and on models to monitor them. The big advances brought by innovation in artificial intelligence mustn't be stifled, but our rights still have to be protected. The EU's General Data Protection Regulation, which comes into force in 2018, offers a sensible, proportionate form of regulation.
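One simple form such monitoring can take is a demographic-parity check: compare how often a system grants a favourable outcome to different groups. The following Python sketch uses hypothetical groups and leaves the acceptable gap to the auditor; it is one illustrative metric, not a procedure prescribed by the regulation.

```python
def demographic_parity_gap(decisions):
    """decisions: list of (group, favourable_outcome) pairs."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [ok for g, ok in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    # Gap between the best- and worst-treated groups' approval rates.
    return max(rates.values()) - min(rates.values()), rates

decisions = [
    ("group_x", True), ("group_x", True), ("group_x", False),
    ("group_y", True), ("group_y", False), ("group_y", False),
]
gap, rates = demographic_parity_gap(decisions)
print(rates, gap)  # flag the system for review if the gap exceeds a threshold
```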

No, says Mouloud Dey of SAS.

We need to be able to audit any algorithm potentially open to inappropriate use. But creativity can't be stifled, nor research placed under an extra burden. Our response must be measured, not premature. Creative individuals must be allowed the freedom to work, not assigned bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered: it is generally not the computer program that is at fault but the way it is used.

It's the seemingly mysterious, ill-intentioned and quasi-automatic algorithms that are often apportioned blame, but we need to look at the entire chain of production, from the programmer and the user to the managers and their decisions. We can't throw the baby out with the bathwater: an algorithm developed for a debatable use, such as military drones, may also have an evidently useful application that raises no questions.

We may criticise Google's management of our data, but it would have been a huge shame if the company had folded 20 years ago because of unresolved privacy and data protection issues. New legislation may not even be required. Take, for example, Pokémon Go: the law already prohibits me from endangering other people's lives while playing it.

There are also obstacles to introducing a regulator: the complexity of the mandate, the burden on innovation, and the fact that its work would always lag behind the rapid pace of technological progress. Users must also play their part. I may work in the digital sector, but I'm not on Facebook, as I don't see its utility. You will, however, find me on LinkedIn, even though its algorithms don't differ fundamentally.

Citizens should know how algorithms affect them. But let's be frank: the average mortal is not capable of verifying one. In the end, we must trust others to do so for us. In this market particularly, self-regulation can succeed, given the proximity of clients to companies and the enormous pressure clients can wield upon them. It's a company's responsibility to explain very clearly how a system works. Once again, problems arise from the use of a program, not its mere existence.

Mouloud Dey is the director of Innovation and Business Solutions at SAS France and a member of the Scientific Council of the Data ScienceTech Institute at the Nice Sophia Antipolis University.

Comments

BackBurner
Jan 08, 2017
Certainly, if there is no commercial use of an algorithm, no "product", there's no liability, but where a product is involved and there is commercial use, then there is associated product liability under existing law. Exactly where that liability lies probably isn't relevant. In the example of the driverless car given in this article, will it be obvious that a pedestrian was injured due to a sensor failure or an algorithm failure? Why would it matter to the injured party, or the court? The car itself is the product, and the manufacturer of the car is liable for its fitness. Only a post hoc investigation will assign blame, but that's an investigation made by the liable company with the obvious goal of either suing its subcontractors or correcting the problem. It doesn't relieve the systems integrator of liability.

This is a fabrication. No further legislation is needed.
xponen
Jan 08, 2017
@BackBurner
Regulating algorithms is very important. Have you ever heard of anyone holding (software) developers liable for damage? No, because users waive that right when they agree to the terms and conditions of the service. The algorithm (specifically the consumer version) is immune to liability law, as agreed by the user. The state, on the other hand, is responsible for the public good (common good), and that is the aim of the regulation; it doesn't matter whether you waived your right, because the argument is about the public good and the damage to society if the algorithm malfunctions.

Suing a (software) product maker for liability is a waste of time, especially as a user... unless a regulation is backing you.
