Time to discard the metric that decides how science is rated

Jun 11, 2014 by David Drubin

Scientists, like other professionals, need ways to evaluate themselves and their colleagues. These evaluations are necessary for better everyday management: hiring, promotions, awarding grants and so on. One evaluation metric has dominated these decisions, and that is doing more harm than good.

This metric, called the journal impact factor (or simply the impact factor), is released annually and counts the average number of times a particular journal's articles are cited by other scientists in subsequent publications over a certain period of time. The upshot is that it creates a hierarchy among journals, and scientists vie to get their research published in a journal with a higher impact factor, in the hope of advancing their careers.
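For a rough sense of the arithmetic (the figures here are purely illustrative): the standard two-year impact factor for a given year is the number of citations received that year to items a journal published in the previous two years, divided by the number of citable items it published in those two years. A journal that published 200 articles across 2011 and 2012, and whose articles attracted 1,000 citations in 2013, would therefore have a 2013 impact factor of 5, an average that says nothing about whether any single one of those articles was cited 100 times or not at all.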

The trouble is that the impact factor of the journals where researchers publish their work is a poor surrogate for measuring an individual researcher's accomplishments. Because the range of citations to articles in a journal is so large, the impact factor of a journal is not a good predictor of the number of citations any individual article will receive. The flaws in this metric have been acknowledged widely: it lacks transparency and, most of all, it has unintended effects on how science gets done.

A recent study that attempted to quantify the extent to which publication in high-impact-factor journals correlates with academic career progression highlights just how embedded the impact factor is. While other variables also correlate with the likelihood of getting to the top of the academic ladder, the study shows that impact factors and academic pedigree are rewarded over and above the quality of publications. The study also finds evidence of gender bias against women in career progression and emphasises the urgent need for reform in how researchers are assessed.

Judging scientists by their ability to publish in the journals with the highest impact factors means that scientists waste valuable time and are encouraged to hype up their work, or worse, all in an effort to secure a place in these prized journals. They also get no credit for sharing data, software and resources, which are vital to progress in science.

This is why, since its release a year ago, more than 10,000 individuals across the scholarly community have signed the San Francisco Declaration on Research Assessment (DORA), which aims to free science from the obsession with the impact factor. The hope is to promote the use of alternative and better methods of research assessment, which will benefit not just the scientific community but society as a whole.

The DORA signatories originate from across the world, and represent just about all constituencies that have a stake in science's complex ecosystem – including funders, research institutions, publishers, policymakers, professional organisations, technologists and, of course, individual researchers. DORA is an attempt to turn these expressions of criticism into real reform of research assessment, so that hiring, promotion and funding decisions are conducted rigorously and based on scientific judgements.

We can also take heart from real progress in several areas. One of the most influential organisations making positive steps towards improved assessment practices is the US National Institutes of Health. The specific changes that have come into play at the NIH concern the format of the CV, or "biosketch", in grant applications. To discourage grant reviewers from focusing on the journal in which previous research was published, the NIH has inserted a short section into the biosketch where the applicant concisely describes their most significant scientific accomplishments.

At the other end of the spectrum, it is just as important to find individuals who are adopting new tools and approaches to show their own contributions to science. One such example is Steven Pettifer, a computer scientist at the University of Manchester, who gathers metrics and indicators, combining citations in scholarly journals with social media coverage of his individual articles, to provide a richer picture of the reach and influence of his work.

Another example, as reported in the journal Science, comes from one of the DORA authors, Sandra Schmid at the University of Texas Southwestern Medical Center. She conducted a search for new faculty positions in the department she leads by asking applicants to submit responses to a set of questions about their key contributions at different stages of their career, rather than a traditional CV with a list of publications. A similar approach was also taken to select the recipients of a prestigious prize recognising graduate student research, the Kaluza Prize.

These examples highlight that reform of research assessment is possible right now by anyone or any organisation with a stake in the progress of science.

One common feature among funding agencies with newer approaches to research assessment is that applicants are often asked to restrict the evidence that supports their application to a limited number of research contributions. This emphasises quality over quantity. With fewer research papers to consider, there is a greater chance that the evaluators can focus on the science, rather than the journal in which it is published. It would be encouraging if more of these policies also explicitly considered outputs beyond publications, such as major datasets and software, a move made by the US National Science Foundation in January 2013. After all, the accomplishments of scientists cannot be measured in journal articles alone.

There have been at least two initiatives that focus on metrics and indicators at the article level, from the US standards agency NISO and the UK higher education funding body HEFCE. Although it would be premature to rely heavily on such metrics and indicators in research assessment, and the notion of an "article impact factor" is fraught with difficulty, the development of standards, transparency and improved understanding should make these metrics valuable sources of evidence of the reach of individual research outputs, as well as tools to support new ways to navigate the literature.

As more and more examples appear of practices that don't rely on impact factors and journal names, scientists will realise that they might not be as trapped by a single metric as they think. Reform will help researchers by enabling them to focus on their research and help society by improving the return on the public investment in science.


User comments : 1


TimLong2001
Jun 12, 2014
Another method of evaluation that is questionable is "peer review", which tends to maintain the status quo and stifle innovation, thus promoting mediocrity.
