Scholarly snowball: Deep learning paper generates big online collaboration

April 4, 2018, Morgridge Institute for Research
Credit: CC0 Public Domain

Bioinformatics professors Anthony Gitter and Casey Greene set out in summer 2016 to write a paper about biomedical applications for deep learning, a hot new artificial intelligence field striving to mimic the neural networks of the human brain.

They completed the paper, but also triggered an intriguing case of academic crowdsourcing. Today, the paper has been massively written and revised with the help of more than 40 online collaborators, most of whom contributed enough to become co-authors.

The updated study, "Opportunities and obstacles for deep learning in biology and medicine," was published April 4, 2018 in the Journal of the Royal Society Interface.

Gitter, of the Morgridge Institute for Research and the University of Wisconsin-Madison, and Greene, of the University of Pennsylvania, both work on applying computational tools to big challenges in health and biology. They wanted to see where deep learning was making a difference and where its untapped potential lies in the biomedical world.

Gitter likened the process to how the open-source software community works.

"We are basically taking a software engineering approach to writing a scholarly paper," he says. "We're using the GitHub website as our primary writing platform, which is the most popular place online for people to collaborate on writing code."

Adds Gitter: "We also adopted the mentality of getting a big team of people to work together on one product, and coordinating what needs to be done next."

The new authors frequently provided examples of how deep learning is affecting their corner of science. For example, Gitter says one scientist contributed a section on cryo-electron microscopy, a new must-have imaging tool in biology that is adopting deep learning techniques. Others rewrote portions to make the paper more accessible to non-biologists or provided ethical background on medical data privacy.

Deep learning is part of a broader family of machine learning tools that has made breakthrough gains in recent years. It uses the structure of artificial neural networks, feeding inputs through multiple layers to train the algorithm. It can learn to identify and describe recurring features in data, while also being able to predict outputs. Deep learning can also work in "unsupervised" mode, where it identifies interesting patterns in data without being directed.
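The layered structure described above can be sketched in a few lines of code. This is a minimal illustration, not code from the study: each layer applies a linear map followed by a nonlinearity, and the weights here are random placeholders rather than trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One fully connected layer with a ReLU nonlinearity."""
    return np.maximum(0.0, x @ w + b)

# A toy network: 4 input features -> 8 hidden units -> 3 outputs.
# Training would adjust w1, b1, w2, b2 to fit data; here they are random.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=(5, 4))        # 5 example inputs
hidden = layer(x, w1, b1)          # first layer extracts features
outputs = layer(hidden, w2, b2)    # second layer produces predictions

print(outputs.shape)
```

Stacking more such layers is what makes the network "deep": each layer builds recurring features from the previous one's output.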

One famous example of unsupervised deep learning is when a Google-produced neural network identified that the three most important components of online videos were faces, pedestrians and cats—without being told to look for them.

Deep learning has transformed applications such as facial recognition, speech recognition and language translation. Among the scores of clever applications is a program that learns the signature artistic traits of famous painters, and then transforms everyday pictures into a Van Gogh, Picasso or Monet.

Greene says deep learning has not yet revealed the "hidden cats" in healthcare data, but there are some promising developments. Several studies are using deep learning to better categorize breast cancer patients by disease subtype and most beneficial treatment option. Another program trains on huge natural image databases to diagnose diabetic retinopathy and melanoma. Some of these applications have surpassed state-of-the-art tools.

Deep learning is also contributing to better clinical decision-making, improving the success rates of clinical trials, and powering tools that can better predict the toxicity of new drug candidates.

"Deep learning tries to integrate things and make predictions about who might be at risk to develop certain diseases, and how we can try to circumvent them early on," Gitter says. "We could identify who needs more screening or testing. We could do this in a preventative, forward thinking manner. That's where my co-authors and I are excited. We feel like the potential payoff is so great, even if the current technology cannot meet these lofty goals."


More information: Opportunities and obstacles for deep learning in biology and medicine, Journal of the Royal Society Interface (2018). rsif.royalsocietypublishing.or … .1098/rsif.2017.0387
