Mining the language of science

November 18, 2011
Categorising textual information. Credit: iStockphoto/Enot Poluskun

(PhysOrg.com) -- Scientists are developing a computer that can read vast amounts of scientific literature, make connections between facts and develop hypotheses.

Ask any biomedical scientist whether they manage to keep on top of reading all of the publications in their field, let alone an adjacent field, and few will say yes. New publications are appearing at a double-exponential rate, as measured by MEDLINE – the US National Library of Medicine’s biomedical bibliographic database – which now lists over 19 million records and adds up to 4,000 new records daily.

For a prolific field such as cancer research, the number of publications could quickly become unmanageable and important hypothesis-generating evidence may be missed. But what if scientists could instruct a computer to help them?

To be useful, a computer would need to trawl through the literature in the same way that a scientist would: reading it to uncover new knowledge, evaluating the quality of the information, looking for patterns and connections between facts, and then generating hypotheses to test. Not only might such a program speed up the progress of scientific discovery but, with the capacity to consider vast numbers of factors, it might even discover information that could be missed by the human brain.

The aim of Dr. Anna Korhonen and researchers in the Natural Language and Information Processing Group in the University of Cambridge’s Computer Laboratory is to develop computers that can understand written language in the same way that humans do. One of the projects she is involved in has recently developed a method of ‘text mining’ one of the most literature-dependent areas of biomedicine: cancer risk assessment of chemicals.

Every year, thousands of new chemicals are developed, any one of which might pose a potential risk to human health. Complex risk assessment procedures are in place to determine the relationship between exposure and the likelihood of developing cancer, but it’s a lengthy process, as Royal Society University Research Fellow Dr Korhonen explained: “The first stage of any risk assessment is a literature review. It’s a major bottleneck. There could be tens of thousands of articles for a single chemical. Performed manually, it’s expensive and, because of the rising number of publications, it’s becoming too challenging to manage.”

CRAB, the tool her team has developed in collaboration with Professor Ulla Stenius’ group at the Institute of Environmental Medicine at Sweden’s Karolinska Institutet, is a novel approach to cancer risk assessment that could help risk assessors move beyond manual literature review.

The approach is based on text-mining technology, which has been pioneered by computer scientists, and involves developing programs that can analyse natural language texts, despite their complexity, inconsistency and ambiguity. The tool Dr. Korhonen has developed with her colleagues is the first text-mining tool aimed at aiding literature review in chemical risk assessment.

At the heart of CRAB, the development of which was funded by the Medical Research Council and the Swedish Research Council among others, is a taxonomy that specifies scientific evidence used in cancer risk assessment, including key events that may result in cancer formation. The system takes the textual content of each relevant MEDLINE abstract and classifies it according to the taxonomy. At the press of a button, a profile is rapidly built for any particular chemical using all of the available literature, describing highly specific patterns of connections between chemicals and toxicity.
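The classification step described above can be illustrated with a minimal sketch. This is not the actual CRAB system: the taxonomy categories, keyword lists, and function names below are hypothetical inventions for illustration, standing in for CRAB's much richer taxonomy of scientific evidence.

```python
# Illustrative sketch of taxonomy-based abstract classification.
# The categories and keywords are invented examples, not CRAB's real taxonomy.
TAXONOMY = {
    "genotoxicity": ["dna damage", "mutation", "micronucleus"],
    "cell proliferation": ["mitogenic", "hyperplasia", "proliferation"],
    "oxidative stress": ["reactive oxygen", "lipid peroxidation"],
}

def classify_abstract(text):
    """Return the taxonomy categories whose keywords appear in the text."""
    lowered = text.lower()
    return sorted(
        category
        for category, keywords in TAXONOMY.items()
        if any(keyword in lowered for keyword in keywords)
    )

def build_profile(abstracts):
    """Aggregate classifications across abstracts into a chemical's profile."""
    profile = {category: 0 for category in TAXONOMY}
    for text in abstracts:
        for category in classify_abstract(text):
            profile[category] += 1
    return profile

abstracts = [
    "Exposure induced DNA damage and micronucleus formation in rat liver.",
    "The compound showed a mitogenic effect and marked hyperplasia.",
]
print(build_profile(abstracts))
# {'genotoxicity': 1, 'cell proliferation': 1, 'oxidative stress': 0}
```

A production system would of course go well beyond keyword matching — handling negation, synonyms and the ambiguity of natural language noted earlier — but the overall shape (classify each abstract, then aggregate into a profile per chemical) matches the pipeline the article describes.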

“Although still under development, the system can be used to make connections that would be difficult to find, even if it had been possible to read all the documents,” added Dr. Korhonen. “In a recent experiment, we studied a group of chemicals with unknown mode of action and used the CRAB tool to suggest a new hypothesis that might explain their male-specific carcinogenicity in the pancreas.”

The tool will be available for end-users via an online web interface. However, research into improving text mining will continue. One of the biggest current challenges is to develop adaptive technology that can be ported easily between different text types, tasks and scientific fields.

One day, rather than being at the mercy of the flourishing rate of publication, scientists will have at their fingertips a system to work alongside them that will not only point them towards those references that are relevant to their search, but will also tell them why.


10 comments


Nerdyguy
1.2 / 5 (5) Nov 18, 2011
"Scientists are developing a computer that can read vast amounts of scientific literature, make connections between facts and develop hypotheses."

Wait. Stop there. Who gets to decide which are the "facts" upon which this system will build its ultimate conclusions? Setting aside for a moment such emotionally debated topics as climate change, what about the volume of just poorly-designed studies which are later shown to be invalid? Presumably, this would just be a faster method of dispensing bad science to the world. On the other hand, that would make it no different from the current system, except for the speed.
Jeddy_Mctedder
1 / 5 (3) Nov 18, 2011
combine this with a generalist creative thinker like ken jennings---- let him learn to use this as a tool. the combo could be wildly powerful
Jotaf
not rated yet Nov 18, 2011
Nerdyguy: In natural language processing, a crucial issue is dealing with inconsistencies/noise (as pointed out in the article). You simply can't do any processing without it, because you can't model all the subtleties of language and human thinking.

The inconsistencies in the data are presumably treated in the same way as in the natural language (they're bundled together in the articles). So any outlier which doesn't agree with most other studies won't be taken into account, same way as a portion of writing that doesn't make sense to the system.

In the worst-case scenario that the majority of studies are wrong but in exactly the same way, the system can't do better than human scientists would. The general assumption is that repeatable experiments are correct, and you need a very, very good reason to dispute that.

I certainly wouldn't mind an automated companion to give me a general overview of a lot of papers at once!
Nerdyguy
2.3 / 5 (3) Nov 18, 2011
Makes sense to me. May still have the same biases as humans, but will be faster.
Sean_W
1 / 5 (4) Nov 18, 2011
What if it comes up with the hypothesis that scientists are bad at statistics or that meta-analysis is a crock or that the peer review process is in terrible need of reform? Will we be able to fire it?
rwinners
1 / 5 (3) Nov 19, 2011
Hey, a thinking computer. What a concept!
hush1
1 / 5 (2) Nov 19, 2011
...develop computers that can understand written language in the same way that humans do


An ambitious goal. Let me understand how humans understand written language first. Then use this knowledge to help humans understand written language. Then develop computers that can understand written language.

Of course text mining is orders of magnitude below this ambitious goal.
Seeker2
1 / 5 (2) Dec 08, 2011
Presumably, this would just be a faster method of dispensing bad science to the world.

Maybe we could dispense of bad science period.
Seeker2
1 / 5 (2) Dec 08, 2011
What if it comes up with the hypothesis that scientists are bad at statistics or that meta-analysis is a crock or that the peer review process is in terrible need of reform? Will we be able to fire it?

I think I can draw my own conclusions, if appropriate. Just give me the relevant facts. As for making hypotheses man that would be scary. Should be good for laughs though.
Seeker2
1 / 5 (2) Dec 08, 2011
Who gets to decide which are the "facts" upon which this system will build its ultimate conclusions?
A system should be able to gather some facts. Watch out for those ultimate conclusions.
Setting aside for a moment such emotionally debated topics as climate change, what about the volume of just poorly-designed studies which are later shown to be invalid?
Poorly designed studies should be identified before they can spread. Especially controversial issues like climate change.
