Is reliable artificial intelligence possible?

March 15, 2017 by Hillary Sanctuary, École Polytechnique Fédérale de Lausanne

In the quest for reliable artificial intelligence, EPFL scientist Marcel Salathé argues that AI technology should be openly available. He will be discussing the topic at this year's edition of South by Southwest on March 14th in Austin, Texas.

Will artificial intelligence (AI) change the nature of work? For EPFL theoretical biologist Marcel Salathé, the answer is invariably yes. To him, a more fundamental question that needs to be addressed is: who owns that artificial intelligence?

"We have to hold AI accountable, and the only way to do this is to verify it for biases and make sure there is no deliberate misinformation," says Salathé. "This is not possible if the AI is privatized."

AI is both the algorithm and the data

So what exactly is AI? It is generally regarded as "intelligence exhibited by machines". Today, it is highly task specific: designed, for instance, to beat humans at strategic games like chess and Go, or to diagnose skin disease on a par with doctors.

On a practical level, AI is implemented through what scientists call "machine learning", which means using a computer to run specifically designed software that can be "trained", i.e., process data with the help of algorithms and learn to correctly identify certain features in that data set. Like human cognition, AI learns by trial and error. Unlike humans, however, AI can process and recall large quantities of data, giving it a tremendous advantage over us.
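
To make that "trial and error" idea concrete, here is a minimal sketch of a training loop in Python with PyTorch. The toy data, the small network and the hyperparameters are illustrative assumptions for this article, not the actual systems Salathé or others have built.

```python
# A minimal sketch of supervised "training": predict, measure the error,
# adjust the model, repeat. All data and settings here are toy assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data set: 200 examples with 16 features each, two classes.
X = torch.randn(200, 16)
y = (X[:, 0] + X[:, 1] > 0).long()   # a simple hidden rule the model must learn

# A small feed-forward classifier.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Trial and error: the loss says how wrong the predictions are,
# and each step nudges the weights to make them a little less wrong.
for epoch in range(100):
    optimizer.zero_grad()
    logits = model(X)            # predictions for the whole data set
    loss = loss_fn(logits, y)    # how wrong the predictions are
    loss.backward()              # compute how to adjust each weight
    optimizer.step()             # adjust the weights slightly

accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")
```

The same pattern scales up: replace the toy features with images and the small network with a deep one, and the quality of what the model learns is driven by the data it is trained on.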

Crucial to AI learning, therefore, is the underlying data. For Salathé, AI is defined by both the algorithm and the data, and as such, both should be publicly available.

Deep learning algorithms can be perturbed

Last year, Salathé created an algorithm to recognize plant diseases. With more than 50,000 photos of healthy and diseased plants in the database, the algorithm uses artificial intelligence to diagnose plant diseases with the help of your smartphone. As for human disease, a recent Stanford study on skin cancer showed that AI can be trained to recognize the disease slightly better than a group of doctors. The consequences are far-reaching: AI may one day diagnose our diseases instead of doctors. If so, will we really be able to trust its diagnosis?

These diagnostic tools use data sets of images to train and learn. But visual data sets can be perturbed in ways that prevent the AI from correctly classifying images. Deep neural networks are highly vulnerable to visual perturbations that are practically impossible to detect with the naked eye, yet cause the AI to misclassify images.
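
As an illustration, the sketch below applies one well-known perturbation technique, the fast gradient sign method (FGSM), to a toy classifier. The stand-in model and random "image" are assumptions for demonstration only, not the specific attacks or diagnostic systems discussed above.

```python
# A minimal sketch of the fast gradient sign method (FGSM): nudge every pixel
# a tiny step in the direction that increases the model's error. With a real
# trained classifier, such an imperceptible change is often enough to flip
# the prediction. Model and "image" below are toy stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for a photo
true_label = torch.tensor([3])

# Forward and backward pass: how wrong is the model, and how does the error
# change with each pixel?
loss = loss_fn(model(image), true_label)
loss.backward()

# FGSM step: epsilon controls how visible the change is; 0.01 is far below
# what a human eye would notice.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction: ", model(image).argmax(dim=1).item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
print("max pixel change:    ", (adversarial - image).abs().max().item())
```

The unsettling point is the last line: the largest change to any pixel is tiny, yet the classifier may now give a different answer.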

In future implementations of AI-assisted medical diagnostic tools, these perturbations pose a serious threat. More generally, the perturbations are real and may already be affecting the filtered information that reaches us every day. These vulnerabilities underscore the importance of certifying AI technology and monitoring its reliability.

1 comment

BrettC
Apr 06, 2017
The term A.I. is used way too often and is not clearly defined. I like the term deep learning better. A.I. means different things to different people. To me, it usually conjures an intelligence modeled on human intelligence, which is inherently flawed and would not be reliable at all. This would make it unsuitable for most tasks requiring reliability.
