Why artificial intelligence has not yet revolutionised healthcare

December 8, 2016 by Olivier Salvado, The Conversation
Scans are still largely studied by humans. Credit: Shutterstock/bikeriderlondon

Artificial intelligence and machine learning are predicted to be part of the next industrial revolution and could help business and industry save billions of dollars over the next decade.

The tech giants Google, Facebook, Apple, IBM and others are applying artificial intelligence to all sorts of data.

Machine learning methods are being used in areas such as translating language almost in real time, and even identifying images of cats on the internet.

So why haven't we seen artificial intelligence used to the same extent in healthcare?

Radiologists still rely on visual inspection of magnetic resonance imaging (MRI) or X-ray scans – although IBM and others are working on this issue – and doctors have no access to AI for guiding and supporting their diagnoses.

The challenges for machine learning

Machine learning technologies have been around for decades, and a relatively recent technique called deep learning keeps pushing the limit of what machines can do. Deep learning networks comprise neuron-like units organised into hierarchical layers, which can recognise patterns in data.

This is done by iteratively presenting data along with the correct answer to the network until its internal parameters, the weights linking the artificial neurons, are optimised. If the training data capture the variability of the real world, the network is able to generalise well and provide the correct answer when presented with unseen data.
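As a minimal sketch of this training loop (in Python with NumPy, on made-up toy data rather than medical records), the internal weights of a single-layer model can be optimised by repeatedly presenting inputs together with their correct answers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 200 cases with 3 features each, plus the correct
# answer (label) for every case.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)

# The model's internal parameters (here a single layer of weights) are
# optimised by iteratively presenting the data with the correct answers.
w = np.zeros(3)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # current predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)   # gradient step toward the answers

# If the training data captured the real variability, unseen data
# should be classified correctly as well.
X_new = rng.normal(size=(50, 3))
y_new = (X_new @ true_w > 0).astype(float)
acc = np.mean((1 / (1 + np.exp(-(X_new @ w))) > 0.5) == y_new)
```

With more layers and nonlinearities this becomes a deep network, but the loop (present data, compare with the answer, adjust the weights) is the same.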

So the learning stage requires a very large number of cases along with the corresponding answers. Millions of records and billions of computations are needed to update the network parameters, often on a supercomputer running for days or weeks.

Herein lie the problems with healthcare: data sets are not yet big enough, and the correct answers to be learned are often ambiguous or even unknown.

We're going to need better and bigger data sets

The functions of the human body, its anatomy and variability, are very complex. The complexity is even greater because diseases are often triggered or modulated by genetic background, which is unique to each individual and therefore hard to train on.

Adding to this, specific challenges exist. These include the difficulty of measuring any biological process precisely and accurately, which introduces unwanted variation.

Other challenges include the presence of multiple diseases (co-morbidity) in a patient, which can often confound predictions. Lifestyle and environmental factors also play important roles but are seldom available.

The result is that medical data sets need to be extremely large.

This is being addressed across the world with increasingly large research initiatives. Examples include UK Biobank, which aims to scan 100,000 participants.

Others include the Alzheimer's Disease Neuroimaging Initiative (ADNI) in the United States and the Australian Imaging, Biomarkers and Lifestyle Study of Ageing (AIBL), tracking more than a thousand subjects over a decade.

Government initiatives are also emerging such as the American Cancer Moonshot program. The aim is to "build a national cancer data ecosystem" so researchers, clinicians and patients can contribute data with the aim to "facilitate efficient data analysis". Similarly, the Australian Genomics Health Alliance aims at pooling and sharing genomic information.

Eventually the electronic medical record systems being deployed across the world should provide extensive, high-quality data sets. Beyond the expected gain in efficiency, the potential to mine population-wide clinical data using machine learning is tremendous. Some companies, such as Google, are eagerly trying to access those data.

Automated image analysis using machine learning technologies can automatically convert a 3-D positron emission tomography (PET) scan (left) into a quantitative reporting display (right) that doctors can consult when diagnosing a patient. Credit: CSIRO

What a machine needs to learn is not obvious

Complex medical decisions are often made by a team of specialists reaching consensus rather than certainty.

Radiologists might disagree slightly when interpreting a scan that is blurred or shows only very subtle features. Inferring a diagnosis from measurements with errors, when the disease is modulated by unknown genes, often relies on implicit know-how and experience rather than explicit facts.

Sometimes the true answer cannot be obtained at all. For example, measuring the size of a structure from a brain MRI cannot be validated, even at autopsy, since post-mortem tissues change in their composition and size after death.

So a machine can learn that a photo contains a cat because users have labelled thousands of pictures with certainty through social media platforms, or told Google how to recognise doodles.

It is a much more difficult task to measure the size of a brain structure from an MRI because no one knows the answer: at best, a consensus from several experts can be assembled, and at great cost.
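A sketch of how such a consensus label might be assembled: given several expert measurements of the same structure (the volumes below are invented for illustration), a robust consensus value and a measure of inter-rater disagreement can be computed.

```python
import numpy as np

# Hypothetical volumes (mm^3) of the same brain structure, measured on
# one MRI by five independent experts. No single value is "the truth".
expert_volumes = np.array([4120.0, 4280.0, 4055.0, 4310.0, 4190.0])

consensus = np.median(expert_volumes)   # robust consensus label
spread = expert_volumes.std(ddof=1)     # inter-rater disagreement

# 'consensus' can serve as the training label, with 'spread' as an
# error bar: a model should not be judged more finely than the
# experts themselves agree with each other.
```

The spread matters as much as the consensus: it sets a floor on how precisely any model trained against these labels can meaningfully be evaluated.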

Several technologies are emerging to address this issue. Complex mathematical models including probabilities such as Bayesian approaches can learn under uncertainty.
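A minimal illustration of learning under uncertainty is the Beta-Bernoulli model: rather than committing to a single answer, the machine maintains a probability distribution over the answer that sharpens as (possibly noisy) expert labels arrive. The label stream below is hypothetical.

```python
# Beta-Bernoulli sketch: keep a distribution over the probability that a
# scan feature indicates disease, instead of committing to one answer.
alpha, beta = 1.0, 1.0          # Beta(1, 1) prior: complete uncertainty

# Hypothetical stream of expert reads of similar scans (1 = disease seen).
labels = [1, 1, 0, 1, 1, 1, 0, 1]
for y in labels:
    alpha += y                  # running count of positive reads
    beta += 1 - y               # running count of negative reads

# Belief after the evidence: (1 + 6) / (2 + 8) = 0.7, with an explicit
# uncertainty (the posterior standard deviation) attached to it.
posterior_mean = alpha / (alpha + beta)
uncertainty = (alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))) ** 0.5
```

The point is that the model's output is a belief plus an uncertainty, not a bare yes/no, which suits medical data where the "true" label is itself in doubt.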

Unsupervised methods can recognise patterns in data without needing to know the actual answers, albeit at the cost of results that are harder to interpret.
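For instance, a minimal k-means clustering sketch (on toy two-dimensional data, not real scan features) finds groups in completely unlabelled data; deciding what each cluster actually means is still left to the analyst.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two unlabelled groups of measurements (toy 2-D features): the
# algorithm is never told which group any point belongs to.
data = np.concatenate([rng.normal(0.0, 0.5, size=(100, 2)),
                       rng.normal(3.0, 0.5, size=(100, 2))])

# Minimal k-means: initialise two centres on data points, then
# alternate assigning points to the nearest centre and moving each
# centre to the mean of its assigned points.
centres = data[[0, 100]].copy()
for _ in range(20):
    dist = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
    assign = dist.argmin(axis=1)
    centres = np.array([data[assign == k].mean(axis=0) for k in (0, 1)])

# The recovered centres sit near (0, 0) and (3, 3): a pattern found
# without any labels at all.
```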

Another approach is transfer learning, whereby a machine can learn from large, different, but relevant, data sets for which the training answers are known.
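A toy sketch of the idea, assuming a simple logistic model: weights learned on a large data set with known answers are used as the starting point for a brief fine-tuning on a much smaller target set, rather than training from scratch.

```python
import numpy as np

rng = np.random.default_rng(2)

def train(X, y, w0, steps=500, lr=0.1):
    """Logistic regression by gradient descent, starting from weights w0."""
    w = w0.copy()
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w_true = np.array([2.0, -1.0, 0.5])

# Large, different-but-relevant data set for which the answers are known.
X_big = rng.normal(size=(5000, 3))
y_big = (X_big @ w_true > 0).astype(float)
w_pretrained = train(X_big, y_big, np.zeros(3))

# Small target data set: fine-tune briefly, starting from the
# transferred weights rather than from zero.
X_small = rng.normal(size=(30, 3))
y_small = (X_small @ w_true > 0).astype(float)
w_final = train(X_small, y_small, w_pretrained, steps=50)

# Held-out check of the fine-tuned model.
X_test = rng.normal(size=(500, 3))
acc = np.mean(((X_test @ w_final) > 0) == ((X_test @ w_true) > 0))
```

In practice the two tasks differ more than in this sketch (e.g. natural photographs versus medical scans), but the transferred parameters still give the small data set a far better starting point than random initialisation.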

Medical applications of machine learning have already been very successful. They often come first in competitions at scientific meetings, where data sets are made available and the evaluation of submitted results is revealed during the conference.

At CSIRO we have been developing CapAIBL (Computational Analysis of PET from AIBL) to analyse 3-D images of brain positron emission tomography (PET).

Using a database with many scans from healthy individuals and patients with Alzheimer's disease, the method is able to learn the pattern characteristics of the disease. It can then identify that signature in an unseen individual's scan. The clinical report generated allows doctors to diagnose the disease faster and with more confidence.

In the case (above), CapAIBL technology was applied to amyloid plaque imaging in a patient with Alzheimer's disease. Red indicates higher amyloid deposition in the brain, a sign of Alzheimer's.

The problem with causation

Probably the most challenging issue is understanding causation. Analysing retrospective data is prone to learning spurious correlations and missing the underlying causes of diseases or the effects of treatments.
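A small simulation illustrates the trap, using an invented scenario in which disease severity drives both the decision to treat and the outcome, while the treatment itself does nothing:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 10_000
# Disease severity drives BOTH who gets treated and the outcome;
# the treatment itself has zero effect in this simulation.
severity = rng.normal(size=n)
treated = severity + rng.normal(scale=0.5, size=n) > 0   # sicker -> treated
outcome = -severity + rng.normal(scale=0.5, size=n)      # sicker -> worse

# Naive retrospective comparison: treated patients do worse on average,
# so a learner could conclude the treatment is harmful.
naive_effect = outcome[treated].mean() - outcome[~treated].mean()

# Comparing only patients of similar severity (what randomisation
# achieves by design) makes the spurious effect vanish.
similar = np.abs(severity) < 0.1
adjusted = outcome[treated & similar].mean() - outcome[~treated & similar].mean()
```

The naive comparison shows a large "harm" that is pure confounding; once severity is held fixed, the apparent effect disappears. A model trained on the raw retrospective data would learn the confounded association.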

Traditionally, randomised clinical trials provide evidence on the superiority of different options, but they don't yet benefit from the potential of machine learning.

New designs such as platform clinical trials might address this in the future, and could pave the way for machine learning technologies to learn evidence rather than mere association.

So large medical data sets are being assembled. New technologies to overcome the lack of certainty are being developed. Novel ways to establish causation are emerging.

This area is moving fast and tremendous potential exists for improving efficiency and health. Indeed many ventures are trying to capitalise on this.

Startups such as Enlitic, large firms such as IBM, or even small businesses such as Resonance Health, are promising to revolutionise health.

Impressive progress is being made but many challenges still exist.

Comment (Dec 16, 2016), quoting "data sets are not yet big enough":

This is no longer true (it's a statement that may have been true 2-3 years ago). Today there are quite a few big medical datasets around for machine learning to be trained to very high levels of accuracy - and there are several additional big collection efforts en route.

The problem is not that machine learning performs badly or that large datasets are missing. The problem, to date, has been that it's a legal nightmare to get these into medical applications.

You can show, through studies, that machine learning can outperform doctors. But you cannot show WHY a machine learning algorithm makes a particular decision.

It's the same problem as with mind reading: You can't read a doctor's mind when he makes a decision, but a doctor can explain to you why he makes a decision based on simple, logical steps.
A machine learning algorithm can't tell you - in a sensibly/reductionist way - why it made a decision.
Comment (Dec 16, 2016):
The robot does not make decisions on its own; it is programmed to make them that way, and you can find the place in the program that makes it do so.

I think the complexity is daunting but the lawyers are the stoppers.
