Intel is collaborating with Novartis on the use of deep neural networks (DNNs) to accelerate high content screening, a key element of early drug discovery. The collaboration team cut the time to train image analysis models from 11 hours to 31 minutes, an improvement of more than 20 times.

High content screening of cellular phenotypes is a fundamental tool supporting early drug discovery. The term "high content" signifies the rich set of thousands of predefined features (such as size, shape, and texture) that are extracted from images using classical image-processing techniques. High content screening allows analysis of microscopy images to study the effects of thousands of genetic or chemical treatments on different cell cultures.
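As a rough illustration of how such predefined features are computed in the classical setting, the sketch below thresholds a single-channel microscopy image and extracts a few per-cell size, shape, and intensity measurements with scikit-image; the file name and the particular features are illustrative assumptions, not details taken from the collaboration's pipeline.

```python
# Minimal sketch of classical feature extraction for high content screening.
# The input path and the chosen features are illustrative assumptions.
from skimage import io, filters, measure

image = io.imread("cells.tif")                  # hypothetical single-channel image
mask = image > filters.threshold_otsu(image)    # simple global intensity threshold
labels = measure.label(mask)                    # connected components = candidate cells

# Predefined per-cell features such as size, shape, and intensity.
features = []
for region in measure.regionprops(labels, intensity_image=image):
    features.append((region.area, region.eccentricity, region.mean_intensity))

print(f"extracted {len(features)} cells, "
      f"{len(features[0]) if features else 0} features each")
```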

The promise of deep learning is that relevant image features that can distinguish one treatment from another are "automatically" learned from the data. By applying acceleration techniques, biologists and data scientists at Intel and Novartis hope to speed up the analysis of high content imaging screens. In this joint work, the team is focusing on whole microscopy images, as opposed to using a separate process to identify each cell in an image first. Whole microscopy images can be much larger than those typically found in deep learning datasets. For example, the images used in this evaluation are more than 26 times larger than the images typically used from the well-known ImageNet dataset of animals, objects and scenes.
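As a rough check on that ratio, assume a standard 224x224 ImageNet crop with three channels and treat the roughly 3.9 million pixel values per microscopy image (the figure quoted below) as the comparison point; both resolutions are assumptions, not numbers stated in the article.

```python
# Back-of-envelope size comparison; both resolutions are assumed for illustration.
imagenet_values = 224 * 224 * 3      # typical ImageNet training crop, 3 channels
microscopy_values = 3_900_000        # ~3.9 million pixel values per microscopy image

print(imagenet_values)                             # 150528
print(round(microscopy_values / imagenet_values))  # ~26
```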

Deep convolutional neural network models for analyzing microscopy images typically operate on millions of pixels per image, millions of model parameters, and possibly thousands of training images at a time, which constitutes a high computational load. Even with advanced computational capabilities on existing computing infrastructure, deeper exploration of DNN models can be prohibitively time-consuming.
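Neither the network architecture nor the exact input resolution is given here, so purely to illustrate the orders of magnitude involved, the sketch below instantiates an off-the-shelf ResNet-50 backbone on an assumed 1024x1280x3 input and prints the pixel and parameter counts.

```python
# Order-of-magnitude illustration only: the architecture and input size are
# assumptions, not the collaboration's actual model.
import tensorflow as tf

height, width, channels = 1024, 1280, 3            # assumed microscopy input size
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(height, width, channels)
)

print(f"pixel values per image: {height * width * channels:,}")   # ~3.9 million
print(f"model parameters:       {backbone.count_params():,}")     # ~23.6 million
```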

To solve these challenges, the collaboration is applying deep neural network acceleration techniques to process multiple images in significantly less time while extracting greater insight from image features that the model ultimately learns.

The collaboration team, with representatives from Novartis and Intel, has shown a more than 20-fold improvement in the time to process a dataset of 10,000 images for training. Using the Broad Bioimage Benchmark Collection 021 (BBBC-021) dataset, the team achieved a total processing time of 31 minutes with over 99 percent accuracy.

For this result, the team used eight CPU-based servers, a high-speed fabric interconnect, and an optimized build of TensorFlow. By exploiting data parallelism in deep learning training and taking full advantage of the large memory available on the server platform, the team was able to scale to more than 120 3.9-megapixel images per second with 32 TensorFlow workers.
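The article does not spell out the distribution mechanism, so the following is only a minimal sketch of the underlying data-parallelism principle, using TensorFlow's built-in MultiWorkerMirroredStrategy with a placeholder model, synthetic data, and an assumed class count.

```python
# Minimal data-parallel training sketch with tf.distribute.MultiWorkerMirroredStrategy.
# Launch one copy per server with TF_CONFIG set; without TF_CONFIG it runs as a
# single worker. Model, image size, class count, and data are placeholders.
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()

def make_dataset(global_batch_size):
    # Stand-in for a real pipeline of large microscopy images and treatment labels.
    images = tf.random.uniform([64, 256, 320, 3])
    labels = tf.random.uniform([64], maxval=13, dtype=tf.int32)
    return (tf.data.Dataset.from_tensor_slices((images, labels))
            .shuffle(64).batch(global_batch_size).repeat())

with strategy.scope():
    # Every worker holds a replica of the model; gradients are averaged across workers.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(256, 320, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(13, activation="softmax"),  # placeholder class count
    ])
    model.compile(optimizer="sgd",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

model.fit(make_dataset(global_batch_size=32), steps_per_epoch=4, epochs=2)
```

In a multi-node run, each global batch is split across workers, which is what lets image throughput grow with the number of workers.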

While supervised deep learning methods are essential to accelerating image classification and speeding time to insight, they depend on large expert-labeled datasets to train the models. The time and manual effort needed to create such datasets are often prohibitive. Unsupervised methods, which may be applied to unlabeled microscopy images, hold the promise of revealing novel insights for cellular biology and, ultimately, drug discovery. This will be the focus of continuing efforts.

More information: Vebjorn Ljosa et al. Annotated high-throughput microscopy image sets for validation, Nature Methods (2012). DOI: 10.1038/nmeth.2083

Olga Russakovsky et al. ImageNet Large Scale Visual Recognition Challenge, International Journal of Computer Vision (2015). DOI: 10.1007/s11263-015-0816-y

Martín Abadi et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems (2016). arxiv.org/abs/1603.04467

Provided by Intel