Allocation of resources in the scientific community is increasingly based on various quantitative indicators. One of the most important indicators of scientific quality is how often research is cited. However, a new doctoral thesis in theory of science from the University of Gothenburg shows that the number of citations is a poor measurement of the quality of research.
'Citations occur when a researcher provides a reference to previous research results, for example to back up a claim. However, references can be made for many different reasons,' says the author of the thesis Gustaf Nelhans, PhD in theory of science at the University of Gothenburg and lecturer in library and information science at the University of Borås.
Researchers sometimes refer to previous research to indicate the source of certain influences or to identify past work that they want to develop further. But they may also cite previous work in order to argue against it or perhaps even refute it entirely. And sometimes sources are cited out of tradition or routine, simply because everybody else in a field seems to do it.
'The conclusion is that the number of times research is cited is a rather poor indicator of its scientific quality; more citations do not automatically mean higher quality,' says Nelhans.
As a result of the so-called citation culture that has emerged in the scientific community, an increasing number of researchers have started to present their studies not only with the obvious goal of promoting the content, but also with an aim to attract as many citations as possible. The purpose of this is to gain acknowledgement in the scientific community and secure research funding.
On the other hand, Nelhans argues, one can say that a cited article has been 'used' by the later literature and is therefore 'visible'. But such claims must be made carefully, since citations only show up for certain publications, namely articles published in certain peer-reviewed and internationally distributed scientific journals.
Nelhans' thesis points to how the awareness of the effects of citations on research has led to them being perceived as hard currency in the scientific community, from the national level down to the individual researcher at his or her department.
'The problem is that citation statistics offer a complex measurement that hides at least as much information as it reveals. It is therefore important to see the whole extent of this phenomenon and not treat citations as an automatic measure,' says Nelhans, who urges decision-makers to be more careful when basing allocation of research funding on citation statistics.
More information: hdl.handle.net/2077/33516