How science can beat the flawed metric that rules it

Jul 30, 2014 by Nikolaus Kriegeskorte
Image caption: Normal day for a scientist at work.

In order to improve something, we need to be able to measure its quality. This is true in public policy, in commercial industries, and also in science. Like other fields, science has a growing need for quantitative evaluation of its products: scientific studies. However, the dominant metric used for this purpose is widely considered to be flawed. It is the journal impact factor.

The impact factor is a measure of how many times recent papers from a particular scientific journal are cited in other scientific papers. Journals with a high impact factor enjoy prestige. Scientists compete to publish their work there, because this boosts their reputation and funding opportunities. In order to be published in such journals, a paper needs to pass prepublication peer review, a process in which two to four anonymous scientists evaluate its quality.
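For readers unfamiliar with how the number is actually computed, here is a minimal sketch of the standard two-year definition; the journal and the citation counts are invented purely for illustration and are not taken from the article:

```python
# Minimal sketch of the standard two-year impact factor calculation.
# The journal and all numbers below are invented for illustration.

def impact_factor(citations_received: int, citable_items: int) -> float:
    """Impact factor for year Y: citations received in year Y to papers the
    journal published in Y-1 and Y-2, divided by the number of citable items
    it published in those two years."""
    return citations_received / citable_items

# Hypothetical journal: 2,400 citations in 2014 to its 2012-2013 papers,
# of which there were 160 citable items.
print(impact_factor(2400, 160))  # 15.0 -- a journal-level average that says
                                 # nothing about any individual paper in it
```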

The impact factor creates a powerful social reality, in which a paper published in a journal with a higher impact factor is a better paper and a scientist publishing in such journals is a better scientist.

Because the impact factor is based on the number of times all recent papers in a journal are cited, it is widely understood to provide a poor indication of the quality of each individual paper appearing in that journal. It is not just scientists, but also many journal editors and publishers who object to this metric. We have come to a point where the impact factor is almost universally rejected, and embracing it would pose a bit of a risk to your status in academic circles.

Throwing away a bad map

In the San Francisco Declaration on Research Assessment (DORA), editors, publishers, and scientists recommend against the use of journal-based metrics, such as the impact factor, as indicators of the quality of individual papers. Some of the signatories recently reported on DORA in The Conversation. Scientists such as open-access pioneer Michael Eisen, recent Nobel laureate Randy Schekman, and science blogger Dorothy Bishop have similarly been calling for impact factors to be ignored when the quality of research is assessed.

At the same time, however, almost every scientist relies on the metric (or the prestige it confers to a journal) when selecting what to read. "How do you choose what to read?" is one of the more embarrassing questions to drop on a scientist.

Despite its flaws, scientists will rely on the impact factor as long as they have no better indication of the reliability and importance of new scientific papers. When deciding which of two new papers to read (assuming they are equally relevant, and we don't know the authors), most of us will prefer the one that appeared in the journal with the higher impact factor. Assessments of the overall scientific contribution of a scientist or department, similarly, rarely ignore this metric.

It is unrealistic to suggest that a committee deciding who to hire or fund should replicate the assessment of individual papers already performed by peer review. Such committees are typically under considerable time pressure. If they are to make a good decision, they will need to use all available evidence to estimate the quality of the work. The impact factor is unreliable. However, direct assessment of the applicant's work will similarly be compromised by the limits of the committee's time and expertise.

Given a choice between a bad map and no map at all, a rational person will choose the bad map. Asking people to ignore the only indication of the quality of recent scientific papers we currently have, in favour of "judging by the content", is like insisting that we should not choose which books to read until we have read them.

How to beat the impact factor

The only way to beat the impact factor is to provide a better evaluation signal for new scientific papers.

When a paper is published, it is read and judged in private by experts in the field who work on related questions. All we need to do to beat the impact factor is sample those expert judgements and combine them into numerical evaluations that reflect peer opinion on the reliability and importance of individual scientific papers. Such a process of open evaluation would provide ratings that are specific to each paper and combine a larger number of expert opinions than traditional peer review can. The process could also benefit from post-publication commentary.

An open evaluation system will need to be more complex than Facebook likes or product ratings on Amazon. We will need multiple rating scales, at least two: one for reliability and one for importance. We will also need to enable scientists to sign their ratings with digital authentication. Signed judgements will be essential to ensure that the system is trustworthy and transparent. An average of even just a dozen signed ratings by renowned experts would almost certainly provide a better evaluation signal and could free us from our dependence on the impact factor.
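As a rough illustration of what such paper-level evaluation signals could look like, here is a minimal sketch in Python, assuming two 0-10 rating scales combined by a plain unweighted average; the class, field names, and numbers are hypothetical, not something specified in the article:

```python
# Illustrative sketch of per-paper open evaluation: signed ratings on two
# scales (reliability and importance), combined by a plain unweighted mean.
# The field names, 0-10 scales, and reviewer names are made-up assumptions.
from dataclasses import dataclass
from statistics import mean


@dataclass
class SignedRating:
    reviewer: str        # digitally authenticated identity of the rater
    reliability: float   # e.g. on a 0-10 scale
    importance: float    # e.g. on a 0-10 scale


def evaluate(ratings):
    """Combine signed expert ratings into per-paper evaluation signals."""
    return {
        "n_ratings": len(ratings),
        "reliability": round(mean(r.reliability for r in ratings), 2),
        "importance": round(mean(r.importance for r in ratings), 2),
        "raters": [r.reviewer for r in ratings],  # transparency: who judged
    }


ratings = [
    SignedRating("A. Expert", reliability=8.0, importance=6.5),
    SignedRating("B. Specialist", reliability=7.5, importance=7.0),
    SignedRating("C. Colleague", reliability=9.0, importance=5.5),
]
print(evaluate(ratings))  # n_ratings: 3, reliability: 8.17, importance: 6.33,
                          # plus the list of raters
```

Reporting who rated the paper alongside the averages is what would distinguish such signals from anonymous like-counts.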

Time for change

Scientific publishing is currently in a state of flux. Recent developments point in the right direction, although they do not go far enough. These include PubMed Commons, PLoS Open Evaluation, Altmetric, and a large number of new start-ups such as PubPeer, ScienceOpen and The Winnower. Eventually, we might want to consolidate the open evaluation process into a single system, which should ideally be publicly funded and entirely transparent.

The evaluation of scientific papers steers the direction of each field of science, and – beyond science – guides real-world applications and public policy. If papers had reliable ratings, science would progress with a surer step. Only findings judged reliable and important by a broad peer evaluation process would be widely publicised, thus improving the impact of science on society.

The perceived importance of a scientific paper should reflect the deepest wisdom of the scientific community, rather than the judgements of three anonymous peer reviewers. It is time scientists took charge of the evaluation process. Open evaluation will mean a fundamental change of the culture of science toward openness, transparency, and constructive criticism. We are slowly realising that the rules of the game are ultimately up to us, and taking on the creative challenge to change them.





User comments : 12


Doug_Huffman
1 / 5 (4) Jul 30, 2014
[W]e might want to consolidate the open evaluation process into a single system, which should ideally be publicly funded and entirely transparent.
LOL "publicly funded"? Means like the internet was built? Now being taken over by the NWO-OWG using tax-funding taken at the point of a gun.

Individual grassroots efforts are inefficient and anti-fragile against corporatist and government interference.
Doug_Huffman
1 / 5 (3) Jul 30, 2014
Another observation; here we have da' convo blogging about 'impact' - what incredible recherché self-reference as it is "provided" to a news accumulator. See the meme; buzz, twitter, convo? Andy Warhol was prescient and his minutes have been inflated to seconds and microseconds and his program notes into 142 ASCII characters.
antialias_physorg
4.2 / 5 (5) Jul 30, 2014
I agree that having only a few peer reviewers is not optimal, but you have to consider some very hard realities:
- papers contain new research. No one else in the world has done that research. The number of people who work in a field closely related enough to be able to judge it (and whose opinion/judgement would MEAN anything) is very small. The chance of irrelevant reviews would be large
- time constraint. If you wait for hundreds of people to read it/comment on it before you decide whether to read it yourself then you'll be here all decade.
- I can't see researchers having time to comment on every article they read
- Journal papers are peer reviewed anonymously for a good reason. Open review is not. The problems with this approach should be obvious.

In the end the proposed system still leads to scientists reading journal papers over others - albeit with a possibly better grading system that might help when filling posts (or it might not, given 'ballot stuffing').
Pexeso
2.3 / 5 (3) Jul 30, 2014
The impact factor was originally designed for librarians: it was meant to help them choose which journals were worth BUYING. The moment scientists adopted it as a criterion for PUBLISHING, or even for CITATIONS, they turned its original meaning on its head. The positive feedback in applying the criterion leads to its gradual instability and divergence.
KBK
3.7 / 5 (3) Jul 30, 2014
The more important thing is that access and control has been, and is being, sewn up.

The ownership of scientific journals and the access to the articles therein, is controlled by a few companies.

The question is: is this deliberate and well considered? Is the story that it is a purely capitalistic concern a cover for something entirely deeper?

Basically, the "public" (person in the street) is denied access to all these amazing research articles, articles that are hidden away from the very people who are the ones who bring about the REAL changes in the system, via left field discoveries and work. Work coming from those who might simply find/seek the journal reports - and think of something entirely new.

Discovery and human invention, creativity, etc. is blocked via heinous controls and paywalls. (Is it merely the cover story? Investigate ownership...)

We cannot have insular science, built like secretive church dogma, and call it an open functional world.
Doug_Huffman
1 / 5 (1) Jul 30, 2014
Paywalls are the commodification/commoditization of the commentariat.
aturc
not rated yet Jul 30, 2014
https://sciencescape.org/ is doing something along these lines, but only with the health sciences.
antialias_physorg
4.6 / 5 (7) Jul 30, 2014
Basically, the "public" (person in the street) is denied access to all these amazing research articles

You are 'denied access' to scientific articles about as much as you are denied access to a bag of potato chips.
Go buy the articles if you want to. You can buy them individually from the publishers at their websites. The 'blocking' isn't done by science but by those who make money off of publishing journals (in the exact same way that other companies make money off of publishing newspapers, magazines, comic books, etc.). So go ask them why they aren't altruistic in a capitalist system.

If you want the articles for free go ask the authors (nicely). Or just move one friggin finger and go to arxiv - where you'll find most anything you want to read about even before it hits journals.

You're really griping about a non-problem.
tadchem
5 / 5 (1) Jul 30, 2014
'Metrics' are here to stay.
The world is large and complex enough that inevitably important decisions must be made by people with no direct understanding of a situation and no vested interest in the outcomes of their decisions. Metrics are created for these people, because no matter how ignorant they are with respect to the subject matter, they understand basic concepts of arithmetic.
Metrics give them numbers with which they can distinguish, rank, and sort a multitude of objects they must deal with in their ignorance.
Real science builds on its own past successes, while science 'failures' are doomed by future experiments that do not support them.
Empirical validation/falsification is the REAL metric that rules science.
antialias_physorg
5 / 5 (3) Jul 30, 2014
The metric is used for other things (not among scientists to see who's got the biggest science-tonker. Go ask any scientist what their impact factor is. Unless they are actively trying to get tenure at this very moment the answer will be "I have no idea". Guaranteed.)

Such a metric is used by committees to fill empty posts. How else are they going to decide whether someone is a capable researcher (and also, most importantly, capable of communicating his research) than by using some metric that reflects past work in some way? Such committees are required to be impartial, so they have to justify their decision after the fact as objective (in front of the other applicants and any patrons of the institution).
Obviously the position is vacant, so there's no in-house expert to judge solely based on the research.

...and research alone not a good professor does make!
Arties
Jul 30, 2014
This comment has been removed by a moderator.
Squirrel
1 / 5 (1) Jul 30, 2014
Scientists don't read articles--they inverse-pyramid them. They scan thousands of titles, then a subset of hundreds of abstracts, of which a further subset have their introductions and conclusions read. Only a few are read in whole. Right at the bottom of the inverse pyramid, authors are contacted and occasionally turn into collaborators.
Arties
Jul 31, 2014
This comment has been removed by a moderator.
antialias_physorg
5 / 5 (2) Jul 31, 2014
the practically important research is still paywalled

Then pay the paywall. What's your problem? If you want something you pay for it. That's how it goes - always. And since science isn't solely funded by taxpayer money you don't even have that argument to tide you over.

Regarding my previous comment: for example from over 4,000 articles about cold fusion only one or two appeared at ArXiv:

Why would you expect snake-oil blurbs to be featured in a science archive? That's like saying: In all of National Geographic history there are only two articles on porn stars. Low, but not surprising.

Aren't the proponents of science precisely the ones who should ignore the voting system here to the largest extent?

They not only should: they do. No scientist will look up the impact rating of another scientist before conversing. None.

You should stop fantasizing and actually look at reality, or start talking to scientists. Your fantasy world doesn't exist.
