This man was fired by a computer – real AI could have saved him


Ibrahim Diallo was allegedly fired by a machine. Recent news reports relayed the escalating frustration he felt as his security pass stopped working, his computer system login was disabled, and finally he was frogmarched from the building by security personnel. His managers were unable to offer an explanation, and powerless to overrule the system.

Some might think this was a taste of things to come as artificial intelligence is given more power over our lives. Personally, I drew the opposite conclusion. Diallo was sacked because a previous manager hadn't renewed his contract on the new computer system and various automated systems then clicked into action. The problems were not caused by AI, but by its absence.

The systems displayed no knowledge-based intelligence, meaning they didn't have a model designed to encapsulate knowledge (such as human resources expertise) in the form of rules, text and logical links. Equally, the systems showed no computational intelligence – the ability to learn from datasets – such as recognising the factors that might lead to dismissal. In fact, it seems that Diallo was fired as a result of an old-fashioned and poorly designed system triggered by a human error. AI is certainly not to blame – and it may be the solution.

The conclusion I would draw from this experience is that some human resources functions are ripe for automation by AI, especially as, in this case, dumb automation has shown itself to be so inflexible and ineffective. Most large organisations will have a personnel handbook that can be coded up as an automated, expert system with explicit rules and models. Many companies have created such systems in a range of domains that involve specialist knowledge, not just in human resources.
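To make the idea of coding up a personnel handbook concrete, here is a minimal sketch of a rule-based "expert system" in Python. The rules, field names and scenarios are invented for illustration; a real system would encode the actual handbook.

```python
# Minimal sketch of a rule-based HR expert system. The two rules below
# are hypothetical examples, not a real company's policy.

def evaluate(employee):
    """Apply explicit handbook rules and return (decision, reasons)."""
    reasons = []
    # Rule 1: an expired contract triggers a review, not automatic dismissal.
    if not employee.get("contract_active", False):
        reasons.append("contract inactive: escalate to a human manager")
    # Rule 2: dismissal requires an explicit, recorded management decision.
    if employee.get("dismissal_approved_by") is None:
        reasons.append("no recorded approval: block automated termination")
    decision = "escalate" if reasons else "no action"
    return decision, reasons

# Diallo's situation: lapsed contract, no recorded dismissal decision.
decision, reasons = evaluate({"contract_active": False,
                              "dismissal_approved_by": None})
print(decision)   # escalate
print(reasons)
```

Crucially, an explicit rule base like this can always explain *why* it acted, which is exactly what Diallo's managers could not do.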

But a more practical AI system could use a mix of techniques to make it smarter. The way the rules should be applied to the nuances of real situations might be learned from the company's HR records, in the same way common law legal systems like England's use precedents set by previous cases. The system could revise its reasoning as more evidence became available in any given case using what's known as "Bayesian updating". An AI concept called "fuzzy logic" could interpret situations that aren't black and white, applying evidence and conclusions in varying degrees to avoid the kind of stark decision-making that led to Diallo's dismissal.
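The "Bayesian updating" mentioned above can be sketched in a few lines. The prior and likelihood numbers here are invented purely to show the mechanics: each piece of evidence revises the probability that dismissal is actually warranted.

```python
# Hedged sketch of Bayesian updating: revising the probability that a
# dismissal is warranted as evidence arrives. All numbers are illustrative.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

p = 0.10  # prior belief that dismissal is warranted
# Evidence 1: the contract shows as expired
# (likely if dismissal is intended, but also fairly common when it is not)
p = bayes_update(p, likelihood_if_true=0.9, likelihood_if_false=0.3)
# Evidence 2: a manager confirms the renewal was simply never entered
p = bayes_update(p, likelihood_if_true=0.05, likelihood_if_false=0.7)
print(round(p, 3))  # belief drops well below the prior
```

With each update the system's confidence shifts; in this toy run the manager's confirmation drives the probability of "dismissal warranted" down sharply, so no automated action would fire.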

The need for several approaches is sometimes overlooked in the current wave of overenthusiasm for "deep learning" algorithms, complex artificial neural networks inspired by the human brain that can recognise patterns in large datasets. Pattern recognition, however, is all they can do: deep learning algorithms excel at it, but they certainly do not show deep understanding, which is why some experts are now arguing for a more balanced approach.

Using AI in this way would likely reduce errors and, when they did occur, the system could develop and share the lessons with corresponding AI in other companies so that similar mistakes are avoided in the future. That is something that can't be said for human solutions. A good human manager will learn from his or her mistakes, but the next manager is likely to repeat the same errors.

So what are the downsides? One of the most striking aspects of Diallo's experience is the lack of humanity shown. A decision was made, albeit in error, but not communicated or explained. An AI may make fewer mistakes, but would it be any better at communicating its decisions? I think the answer is probably not.

Losing your job and livelihood is a stressful and emotional moment for all but the most frivolous employees. It is a moment when sensitivity and understanding are required. So, I for one would certainly find human contact essential, no matter how convincing the AI chatbot.

A sacked employee may feel that they have been wronged and may wish to challenge the decision through a tribunal. That situation raises the question of who was responsible for the original decision and who will defend it in law. Now is surely the moment to address the legal and ethical questions posed by the rise of AI, while it is still in its infancy.



Provided by The Conversation

This article was originally published on The Conversation. Read the original article.

Citation: This man was fired by a computer – real AI could have saved him (2018, July 3) retrieved 19 July 2019 from https://phys.org/news/2018-07-real-ai.html

User comments

Jul 03, 2018
"Allegedly" is the key word here. I'd bet some executive cancelled a spending/development program, and didn't have the courtesy, decency, and honesty to tell Diallo they were firing him. Hopgood might also be right, that a manager simply didn't renew the employment contract.

Jul 03, 2018
Fuzzy logic isn't all that fuzzy. It's more of a weighted average.

For example:
https://upload.wi...e_en.svg

When you set up a fuzzy "function", you still need to put a knife-edge somewhere so your output transitions from "cold" to "warm" to "hot". You can make any number of intermediate states, but it still reduces down to discrete steps for the actual output. Of course you can define some range of values as a "maybe" state and do whatever you want with it, but the transition to those states is still a discrete jump.

In the context of employees: fire or don't fire. If one of your variables is 0.001 units lower, you fire, and if higher they get to keep their jobs. There's ultimately no ambiguity and no "maybe" states because the output is "do" or "don't".
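The weighted-average point above can be shown in code. This is a sketch with made-up triangular membership functions and an arbitrary 0.5 cut-off: the fuzzy machinery yields a continuous degree of support, but the final "fire / don't fire" output is still a discrete jump.

```python
# Sketch of fuzzy logic as a weighted average. Membership shapes and
# the 0.5 threshold below are illustrative assumptions, not a standard.

def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def support_for_firing(score):
    """Weighted share of the 'fire' category over two fuzzy sets."""
    keep = tri(score, -0.5, 0.0, 0.6)   # membership in "keep"
    fire = tri(score, 0.4, 1.0, 1.5)    # membership in "fire"
    total = keep + fire
    return fire / total if total else 0.0

s = support_for_firing(0.55)
print(s)                               # a continuous degree of support...
print("fire" if s > 0.5 else "keep")   # ...but the action is still discrete
```

Exactly as the comment says: however smooth the intermediate score, the actual output must cross a knife-edge somewhere to become "do" or "don't".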

Jul 03, 2018
https://en.wikipe...fication

Of course you can get a continuous function out of your fuzzy logic, which says something else like "48% support for firing worker" but then you need additional logic to deal with that. What limit do you set? 50%?
