Predictive algorithms are no better at telling the future than a crystal ball


An increasing number of businesses invest in advanced technologies that can help them forecast the future of their workforce and gain a competitive advantage.

Many analysts and professional practitioners believe that, with enough data, algorithms embedded in People Analytics (PA) applications can predict all aspects of employee behavior: from productivity, to engagement, to interactions and emotional states.

Predictive analytics powered by algorithms are designed to help managers make decisions that favourably impact the bottom line. The global market for this technology is expected to grow from US$3.9 billion in 2016 to US$14.9 billion by 2023.

Despite the promise, predictive algorithms are as mythical as the crystal ball of ancient times.

Predictive models are based on flawed reasoning

One of the fundamental flaws of predictive algorithms is their reliance on "inductive reasoning". This is when we draw conclusions based on our knowledge of a small sample, and assume that those conclusions apply across the board.

For example, a manager might observe that all of her employees with an MBA are highly motivated. She therefore concludes that all workers with an MBA are highly motivated.

This conclusion is flawed because it assumes that past patterns will remain consistent. This assumption itself can only be true because of our experience to date, which confirms this consistency. In other words, inductive reasoning can only be inductively justified: it works because it has worked before. Therefore, there is no logical reason to assume that the next person our company hires who has an MBA degree will be highly motivated.

Assumptions like these can be coded into hiring algorithms, which, in this case, would assign a weighting to all job applicants with an MBA degree. But when inductive reasoning is baked into the code of hiring applications, it can lead to unfounded decisions, adversely impact the bottom line, and even discriminate against certain groups of people.
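To make this concrete, here is a minimal sketch, in Python, of how such an inductive generalisation can end up hard-coded in a screening score. The feature names and weights are entirely hypothetical; the point is only that the MBA "bonus" encodes the manager's past observation as if it were a fact about all future applicants.

```python
# Hypothetical screening weights: the fixed bonus for an MBA encodes the
# inductive generalisation ("MBAs are highly motivated") as if it were
# a guaranteed fact about every future hire.
FEATURE_WEIGHTS = {
    "has_mba": 2.0,          # assumption baked in as a fixed bonus
    "years_experience": 0.5,
    "referral": 1.0,
}

def screening_score(applicant: dict) -> float:
    """Sum weighted features; higher scores rank higher in the pipeline."""
    return sum(weight * float(applicant.get(feature, 0))
               for feature, weight in FEATURE_WEIGHTS.items())

applicant_a = {"has_mba": 1, "years_experience": 4}  # 2.0 + 2.0 = 4.0
applicant_b = {"has_mba": 0, "years_experience": 8}  # 0.0 + 4.0 = 4.0
```

Note that in this toy scoring scheme the MBA bonus alone offsets four full years of experience, so the assumption silently shapes who reaches the interview stage.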

For example, a tool used in some parts of the United States to assess whether a person arrested for a crime would re-offend was found to unfairly discriminate against African Americans.

They lead to self-fulfilling prophecies

Another flaw in the predictions thrown up by algorithmic analysis is their propensity to create self-fulfilling prophecies. Acting on algorithmic predictions, managers can create the conditions that ultimately realise those very predictions.

For example, a company may use an algorithm to predict the performance of its recently-hired salespeople. Such an algorithm might draw on data from standardised tests completed during their onboarding process, reviews from previous employers, and demographics. This analysis can then be used to rank new salespeople and justify the allocation of more training resources to those believed to have greater performance potential.

This is likely to produce the very results that the initial analysis predicted. The higher-ranked recruits will perform better than those ranked lower on the list because they have been given superior training opportunities.
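The feedback loop described above can be illustrated with a toy simulation (a sketch under invented assumptions, not a model of any real system): recruits are ranked by an arbitrary initial "prediction", the top half receives more training, and training is the only thing that actually improves performance.

```python
import random

random.seed(0)

def simulate(recruits, rounds=12):
    """Rank recruits by an arbitrary initial 'prediction'; give the top
    half five times the training; training is the only driver of
    performance in this toy model."""
    ranked = sorted(recruits, key=lambda r: r["predicted"], reverse=True)
    for _ in range(rounds):
        for i, r in enumerate(ranked):
            training = 1.0 if i < len(ranked) // 2 else 0.2
            r["performance"] += training * random.uniform(0.8, 1.2)
    return ranked

recruits = [{"name": f"r{i}", "predicted": random.random(), "performance": 0.0}
            for i in range(10)]
result = simulate(recruits)

top_avg = sum(r["performance"] for r in result[:5]) / 5
bottom_avg = sum(r["performance"] for r in result[5:]) / 5
```

Even though the initial "prediction" here is pure noise, the top-ranked half ends up genuinely outperforming the bottom half, so the prediction appears vindicated by its own consequences.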

Calculating probabilities of future events is meaningless

Some practitioners recognise the flaws in the predictive capability of algorithmic systems, but they still see value in generating models that indicate probability.

Rather than predicting the occurrence of future events or states, probabilistic models can indicate the degree of certainty that events or situations might occur in the future.

However, here too it pays to be a little sceptical. When a model calculates that an event is likely to happen, it expresses that likelihood as a percentage of complete (100%) certainty. Any probabilistic prediction is therefore only meaningful in relation to the possibility of complete certainty. But since complete certainty about future events is unattainable, probabilistic models are of no real significance either.

Algorithms don't 'predict', they 'extrapolate'

So if they cannot predict organisational events with complete or even probable certainty, what can predictive algorithms do?

To answer this, we must understand how they work. Once developed and inscribed with their base code, predictive algorithms need to be "trained" to hone their predictive power. This is done by feeding them with past organisational data. They then search for trends in the data and extrapolate rules that can be applied to future data.

For example, workforce planning algorithms can identify employees who are likely to resign. They do this by analysing the personality and behavioural patterns of employees who have resigned in the past and cross-referencing the results with the profiles of existing employees to identify those with the highest matching scores. With each round of application, the model is continually adjusted to correct ever-decreasing prediction errors.
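A bare-bones version of this matching step might look like the following sketch. The feature vectors and the use of cosine similarity are illustrative assumptions, not the method of any particular product; it simply flags current employees whose profiles most resemble those of past leavers.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical profiles of past leavers: [tenure_years, engagement_score, overtime_hours]
past_leavers = [
    [1.0, 2.0, 30.0],
    [2.0, 3.0, 25.0],
]

def attrition_score(employee):
    """Highest similarity to any past leaver. This extrapolates a past
    pattern onto current staff; it does not 'predict' the future."""
    return max(cosine_similarity(employee, leaver) for leaver in past_leavers)

current = {"alice": [1.5, 2.5, 28.0], "bob": [8.0, 9.0, 2.0]}
scores = {name: attrition_score(vec) for name, vec in current.items()}
```

Here "alice", whose profile closely resembles past leavers, scores far higher than "bob", yet the score says nothing about whether either will actually resign; it only measures resemblance to what has already happened.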

However, the term "prediction error" is misleading because these algorithms do not predict, but rather extrapolate. All that predictive algorithms can ever do is guess at what is going to happen based on what has already happened. The leap required to make actual predictions is not a matter of computing power, but rather of bending the laws of physics.

Predictive models can't anticipate change

Because they are extrapolative, predictive algorithms are rather good at identifying regularities, continuity and routine. However, the human brain is also designed to identify stable patterns. Competent managers should be well aware of their organisation's operations, and capable of envisioning steady patterns over time.

What managers find difficult to predict is change. Unfortunately, predictive models are also poor predictors of change. The more radical the change, that is, the more it departs from existing patterns, the more poorly it will be predicted.

To manage effectively and develop their knowledge of current and likely organisational events, managers need to learn to build and trust their instinctual awareness of emerging processes rather than rely on algorithmic promises that cannot be realised. The key to effective decision-making is not algorithmic calculations but intuition.


Provided by The Conversation

This article was originally published on The Conversation. Read the original article.

Citation: Predictive algorithms are no better at telling the future than a crystal ball (2018, February 12) retrieved 21 September 2019 from


User comments

Feb 12, 2018
My opinion of fortune-tellers is that they do not predict the future, no matter what delusions they gimmick up to sell their clients. Actually, what psychic-readers do is a primitive, reductive form of psychotherapy.

All these programs, written by fallible people? They are just faster, fancier and way more expensive forms of engaging in wishful thinking, resulting in trapping your organization in a logic ouroboros.

Feb 12, 2018
Such software products are meant to sell the illusion that the financial sector is based on scientific principles, making people engage in good faith in financial activities as the financial sector wants them to.
For those with even a minimal understanding of the scientific method all these approaches to understanding finances are laughable.
The real scientific world is not involved in clarifying these subjects because it is paid not to. The financial interests grew so big that buying silence is affordable and mandatory.

Feb 12, 2018
Using computer models for predicting climate change seems to be an accepted practice, how is this different?

Feb 13, 2018
Well nm, the difference I see is that projecting financial trends is an attempt to second-guess the non-existent emotional stability of the gamblers running the Wall Street Casino.

Trying to predict the complexities of weather patterns and Climate Change is even more difficult, though the sciences advance with the continuous evolution of technology, providing increasingly reliable data.

However, I do not need to engage in the infantile self-delusions of the deniers of reality. I just step outside my door to experience the local effects of Climate Change.

As expected weather patterns shift in unpredictable ways.

In the last four months, out-of-season very dry heat waves interspersed with severe cold snaps and two violent, short-lived concentrated rainstorms.

Feb 13, 2018
@rrwillsj the article isn't about financial trends, it's about predicting employee behavior. You are using anecdotal evidence which doesn't mean anything in the grand scheme of planet climatology. The article specifically addresses small sample groups.

"One of the fundamental flaws of predictive algorithms is their reliance on "inductive reasoning". This is when we draw conclusions based on our knowledge of a small sample, and assume that those conclusions apply across the board."

Yes, we have much better global data in recent years, but historical data is fairly limited, especially at a global level. Do we have enough historical data to consider climate predictive models reliable based on the shortcomings mentioned in this article?

Feb 14, 2018
nm, you did not want to purchase insurance. You hate having to make regular payments for insurance.

Then, when it all goes wrong for you? That is the time to feel some relief that you had planned ahead, made some personal sacrifices to pay those premiums, and are at least partly recompensed for your losses.

The deniers of Global Climate Change look pretty silly with their heads in the sand and their asses sticking up with those fluffy feathers glued on.

Feb 14, 2018
@rrwillsj, I said nothing about denying climate change. I asked a legitimate question about computer climate model predictions so that I could gain a better understanding in relation to what this article espouses.

Rather than responding about the science behind it, you resorted to ad hominem attacks and name calling. If you don't have any actual knowledge to add to the discussion there is no need to comment.

I'm still hoping that someone can explain how climate computer models are different than business computer models and why one can be trusted but not the other.

Feb 15, 2018
The weather forecast is strictly a deterministic predictive activity, in the sense that given enough computing power we could calculate weather predictions with absolute certainty. We don't, and we will probably never have such computing capabilities, so weather forecasting is based on approximate models, with the approximation leading to predictions of less than 100% certainty. But still, the weather forecast is based on strictly scientific, deterministic laws of nature.
The financial correlations that different models try to describe don't have this quality. The inflation rate and unemployment can't be linked through a deterministic law. There is no scientifically validated law to describe their dependency.

Feb 15, 2018
Financial Markets Have Taken Over the Economy. To Prevent Another Crisis, They Must Be Brought to Heel.

By Servaas Storm

Feb 15, 2018
Now nattyboy, it ain't all about you! You are cowering behind the cult of victimization for no discernible reason I can see.

Ad hominem attacks and satirical ridicule are my schtick. If I wanted to mock you? I'd have specifically singled you out. Instead I included you in with badinage.

Yes, you should feel honored. As my witlessness is sharp and unsparing, even of myself.

I hope that bhulpac has explained the key difference you wanted. To your satisfaction. Cause I'm just too fucking lazy to bother.
