Assessing scientific research by 'citation wake' detects Nobel laureates' papers

December 12, 2014 by Lisa Zyga
The wake scores of all papers in the Physical Review citation base from 1892 to 2009. The dashed line shows the maximal wake size at a given publication date. The “ridge” formed by the data indicates cross-references between scientific subfields. Credit: © Klosik, Bornholdt (CC by 4.0)

Ranking scientific papers in order of importance is an inherently subjective task, yet that doesn't keep researchers from trying to develop quantitative assessments. In a new paper, scientists have proposed a measure of assessment based on the "citation wake" of a paper, which encompasses the direct citations and weighted indirect citations the paper receives. The new method attempts to capture the propagation of ideas rather than the distribution of credit, and it succeeds by at least one significant measure: a large fraction (72%) of its top-ranked papers are coauthored by Nobel Prize laureates.

Ph.D. student David F. Klosik and Dr. Stefan Bornholdt at the University of Bremen have published their paper on the wake measure of publications in a recent issue of PLOS ONE.

As Klosik and Bornholdt explain, scientists' practice of citing the work that influenced them in the reference list of their own publications offers a wealth of data on the structure and progress of science. The difficulty lies in interpreting the data, which is often a controversial process.

The first paper citation network was developed in the 1960s, and early analysis was based almost exclusively on counting a paper's number of direct citations. This method has formed the basis of several newer quantitative methods of assessment, such as the h-index, which attempts to measure the impact of individual researchers, and the Thomson Scientific Journal Impact Factor, which ranks the relative influence of journals.

However, it's well-known that measures based on citation count have several shortcomings. For one thing, a paper's ranking strongly depends on the citation habits and size of the paper's field. Further, newer papers have fewer citations simply because they have not been around long enough to receive as many citations as older papers. On the other hand, the citation count may underestimate the impact of very old yet groundbreaking publications, since once seminal results become textbook knowledge, the original papers are often no longer cited.

More recently, newer methods (such as CiteRank, SARA, and Eigenfactor) have addressed some of these drawbacks by accounting for factors other than direct citations. While they have made improvements, these methods generally view the citation network primarily as one of credit diffusion.

Klosik and Bornholdt's new measure differs in that it views the citation network as a picture of idea propagation, in which the ideas within a paper influence future research far beyond the citations the paper receives directly.

The 10 top-ranked publications according to the wake citation score, with the dilution parameter set to 0.9 (where 1.0 means the whole wake is considered). Nobel Prize laureates are labelled with an asterisk. The second column shows the ratio of the ranks assigned to the paper according to the number of direct citations and the wake citation score, respectively. Credit: © Klosik, Bornholdt (CC by 4.0)

"Our wake citation score is less sensitive to the size of the research community of a paper than other existing measures, as we do not focus on the direct citation count of a paper," Bornholdt said. "What makes our wake citation score unique is our focus on whether a paper 'started something,' by estimating its 'word of mouth dynamics' from the subsequent citation network."

In their study, the researchers analyzed all papers in the Physical Review database, dating back more than a century. In their method, each paper receives a wake citation score. A paper's wake consists of all papers that have cited it, either directly or indirectly. Since a paper can receive citations only from papers published at a later date, these citing papers form a "wake" behind the paper when viewed on a graph.

All papers in a paper's wake are then assigned to neighborhood layers according to the length of the shortest path to the paper (similar to the concept of degrees of separation). In terms of idea propagation, the shortest path can also be viewed as the minimal number of processing steps of an idea.

Finally, the paper's wake citation score is computed as a weighted sum of the total number of papers in each layer. A detrending factor accounts for the fact that, the earlier a paper is published, the more papers there are in the future that could potentially cite it. A dilution factor can also be applied to restrict the number of layers considered, from only direct citations to the full wake of citations.
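The layering and weighting steps described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' exact formula: the geometric dilution weights are an assumption, and the detrending factor is omitted for brevity.

```python
from collections import deque

def wake_citation_score(citations, paper, dilution=0.9):
    """Sketch of a wake citation score (assumed form).

    `citations` maps each paper id to the list of papers it cites.
    Papers in the wake of `paper` are assigned to layers by shortest
    citation path, then layers are summed with weights dilution**(d-1).
    """
    # Build the reverse graph: who cites whom.
    cited_by = {}
    for p, refs in citations.items():
        for r in refs:
            cited_by.setdefault(r, []).append(p)

    # Breadth-first search assigns each paper in the wake a layer equal
    # to its shortest citation-path distance from `paper`.
    layer = {paper: 0}
    queue = deque([paper])
    while queue:
        p = queue.popleft()
        for q in cited_by.get(p, []):
            if q not in layer:
                layer[q] = layer[p] + 1
                queue.append(q)

    # Count papers per layer, then take the diluted weighted sum:
    # direct citations (layer 1) count fully, deeper layers less.
    counts = {}
    for p, d in layer.items():
        if d > 0:
            counts[d] = counts.get(d, 0) + 1
    return sum(n * dilution ** (d - 1) for d, n in counts.items())

# Toy network: B and C cite A directly; D cites B, so D sits in A's
# second layer. Score of A = 2*1 + 1*0.9 = 2.9.
citations = {"B": ["A"], "C": ["A"], "D": ["B"]}
print(wake_citation_score(citations, "A"))  # prints 2.9
```

Setting `dilution=0.0` would reduce the score to the plain direct-citation count, while values near 1.0 weigh the full wake, matching the range of the parameter described in the study.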

The resulting wake citation scores yield a ranking of papers that is very different from a list of papers ranked by number of citations. As the results show, 9 of the top 10 papers ranked by wake citation score are only moderately cited (the exception is the #1-ranked paper, "Theory of Superconductivity" by Bardeen, Cooper, and Schrieffer). The other papers show a very high ratio between their direct-citation rank and their wake-citation rank. For example, the paper ranked #2 by wake citation score ("The Radiation Theories of Tomonaga, Schwinger, and Feynman" by F. Dyson) has a ratio of 707.5, corresponding to a direct-citation rank of 1,415. Among the top 100 papers ranked by wake citation score, 86 show a ratio higher than 10.

As for which ranking method is "better," there is of course no objective measure of importance; otherwise, that would be the only measure needed. But given the widely accepted scientific quality of Nobel Prize research, Klosik and Bornholdt checked how many of their top-ranked papers were coauthored by Nobel Prize laureates. They found that 18 of the top 25 and more than half of the top 100 papers have contributions from a Nobel Prize laureate. In contrast, the ranking by direct citation count yields Nobel author contributions in just 4 of the top 25 and 25 of the top 100 papers. (Overall, the ranking by direct citation in the Physical Review database is dominated by papers on density-functional theory.)

Besides comparing to the direct citation ranking, the researchers also compared the wake citation ranking to one of the more elaborate measures of rank, which is Google's PageRank algorithm. They found that the top papers according to PageRank contain more Nobel laureate coauthors than in the direct citation rank, but fewer than in the wake citation rank. One of the biggest differences between PageRank and wake citation is that PageRank counts weighted paths (the connections between papers) while wake citation counts weighted nodes (the papers themselves).
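The path-versus-node distinction can be seen on a toy graph. The sketch below is an illustration only, using a plain power-iteration PageRank with an arbitrarily chosen damping of 0.5 (not the variant or parameters used in the study): two papers with the same direct citation count receive different PageRank scores because one is cited by a more important paper.

```python
def pagerank(edges, alpha=0.5, iters=100):
    """Minimal power-iteration PageRank. `edges` are (citing, cited)
    pairs, so rank flows from a paper to the papers it cites."""
    nodes = sorted({n for e in edges for n in e})
    out = {v: [] for v in nodes}
    for p, q in edges:
        out[p].append(q)
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - alpha) / n for v in nodes}
        for v in nodes:
            targets = out[v] or nodes  # dangling nodes spread uniformly
            for q in targets:
                nxt[q] += alpha * pr[v] / len(targets)
        pr = nxt
    return pr

# A and B each have two direct citations, but B's citers (D, E) are
# themselves uncited, while A is cited by the well-cited paper B.
edges = [("B", "A"), ("C", "A"), ("D", "B"), ("E", "B")]
pr = pagerank(edges)
assert pr["A"] > pr["B"]  # PageRank separates what direct counts cannot
```

A direct-citation count (the in-degree) would rank A and B equally here; PageRank breaks the tie by weighting the paths through which credit arrives, while the wake citation score would instead count how many papers, layer by layer, sit downstream of each.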

While the wake citation method currently applies only to papers, Klosik and Bornholdt plan to extend the measure to scientists in the future.

"We are currently exploring the wake citation score as an impact measure for scientists," Bornholdt said. "This could provide a more balanced ranking of scientists from different fields."

More information: David F. Klosik and Stefan Bornholdt. "The Citation Wake of Publications Detects Nobel Laureates' Papers." PLOS ONE. DOI: 10.1371/journal.pone.0113184


3 / 5 (2) Dec 12, 2014
...and how many of these references were for nonsense like polywater and Piltdown Man?
5 / 5 (7) Dec 12, 2014
Would be interesting to see if there is a noticeable 'bump' in the wake of Nobel Laureate papers after they received the Nobel Prize.

Overall I like the approach.
It may not be quite perfect (some fields probably have inherently longer wakes than others - e.g. zoology vs. computer science - due to the sheer number of papers published on any one subject and the longevity of the subjects themselves). If one could account for that, the disparate fields could become comparable.
5 / 5 (3) Dec 12, 2014
The claim that "scientists' practice of citing the work that influenced them" is not completely true. You have to cite earlier work when you write a paper, regardless of the quality or influence of that paper. In many cases, the first few studies in an area are badly done. But later, well-done studies all have to cite these studies simply because they were earlier. Bad papers can accumulate a lot of citations that way. One might say that this is also "influence", but really it is just influence about how not to do things.
5 / 5 (7) Dec 12, 2014
You have to cite earlier work when you write a paper, regardless of quality or influence of that paper.

You don't just dump cites in a paper. You do a section about state of the art and the perceived need of your research - where you cite some papers you read. And you do a section on your methods - where you cite any methods that you used from other papers that you read.

In either case these are papers that influenced your work. Since you're working on a subject you gradually become an expert yourself - which helps you distinguish the good from the bad papers (and in the first phase - where you aren't an expert - your supervisor should make sure you're not embarrassing him by citing low quality papers. His/her name is on the paper, too, after all.).
5 / 5 (1) Dec 12, 2014
"You do a section about state of the art and the perceived need of your research - where you cite some papers you read."

There's the catch. If the state of the art is a couple of bad papers, then you have to cite them. You can't say "there are no good papers on this". You have to cite and then maybe diplomatically mention limitations. In this way bad papers accumulate citations for years before they fall out of favor without having any real influence on anything.

5 / 5 (6) Dec 12, 2014
There's the catch. If the state of the art is a couple of bad papers, then you have to cite them.

Sure. But being bad papers they don't stay state of the art for long, so their wake will be pretty short (always assuming that your paper sucks less than theirs).
...and working on removing the limitations of previous works is a big part of research. So if you publish something that supersedes a bad paper then you will be cited.
5 / 5 (2) Dec 12, 2014
You don't just dump cites in a paper. You do a section about state of the art and the perceived need of your research

Weak papers and crank authors do. They try to stuff as many citations in on any excuse to have more "traction", make the topic seem more important, in the hopes that people reviewing the papers would be lazy and easily impressed.

5 / 5 (2) Dec 12, 2014
Paper says "impact" but also claims "innovation" as target property. Seems to me only "impact" is correct. There are too few metrics for science concepts. This one reaches significantly different results than direct citation, and seems to be a good alternative. Next, perhaps the quality of the journal and the 'competence/quality' of the citing authors can be weighed-in. Also metric of textbook citations (of the various sorts) would be useful; as the paper says, once a concept appears in textbooks (ie 'dogma'), it often is no longer cited (except for celebrity/bling purposes).
Impact and innovation (I&I) need context: to the scientific community? to global GDP? to human health or happiness? And note that the best metrics for I&I may be found in history or philosophy (and other social science) papers rather than science journals.
5 / 5 (3) Dec 13, 2014
Weak papers and crank authors do.

Weak papers and crank authors don't get past peer review (mostly). So their effect on such a wake - which is based on an assessment of peer reviewed papers - is negligible.
