Peer reviews and bibliometric analysis should be viewed as complementary rather than determinate

New research from Queen Mary University of London suggests that, for large-scale research evaluations such as the Research Excellence Framework (REF), peer review would be more cost-effective if targeted at publications that do not appear in outstanding journals.

The research reveals that peer reviews and bibliometric analysis should be seen as complementary modes of assessment. The researchers suggest that targeting peer reviews to publications whose quality cannot be unambiguously classified using bibliometric analysis would be more effective for assessing research standards in UK universities and a better use of money.

Evaluating research

The researchers used the UK's 2014 REF exercise to study the attributes of top-scoring (four-star) publications in Economics and Econometrics. Although official documents report only aggregate scores for each institution, the researchers show how these aggregates can be used to infer the score awarded by REF panelists to each publication.
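
To see why aggregate scores can be informative about individual publications, here is a minimal illustrative sketch in Python. It is not the authors' procedure: the institution names, output counts, and quality profiles below are invented. The point is only that a profile placing all (or none) of an institution's outputs at four stars identifies every output's score exactly, while intermediate profiles merely bound the number of four-star outputs, so further attributes are needed to tell them apart.

```python
"""Illustrative sketch only -- NOT the authors' procedure.

Assumes we have (a) each institution's published REF quality profile
(share of outputs at each star level) and (b) how many outputs it
submitted. All names and numbers below are hypothetical.
"""

# hypothetical aggregate profiles: share of outputs rated 4*, 3*, 2*, 1*
profiles = {
    "Inst A": {"outputs": 4, "shares": [1.00, 0.00, 0.00, 0.00]},
    "Inst B": {"outputs": 5, "shares": [0.40, 0.60, 0.00, 0.00]},
}

for name, info in profiles.items():
    n = info["outputs"]
    four_star_count = round(info["shares"][0] * n)
    if four_star_count in (0, n):
        # a degenerate profile pins down every output's score exactly
        label = "all outputs identified from the aggregate alone"
    else:
        # the aggregate only says how many outputs are 4*, not which ones;
        # extra attributes (journal, citations) are needed to narrow this down
        label = f"{four_star_count} of {n} outputs are 4*, identity ambiguous"
    print(f"{name}: {label}")
```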

The findings demonstrate that this score responds to journal prestige as measured by the Thomson Reuters Article Influence Score. Although the use of this particular metric is justified by its attractive properties and by previous research, other measures of journal impact are also in use among researchers.

Several econometric analyses confirm that other publication attributes, such as citations, contribute little to the score awarded by REF panelists, and that publications in the top generalist and top-five Economics journals are unambiguously awarded four stars.
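
As a rough illustration of this kind of check (not the paper's specification), the sketch below simulates data and runs an ordinary least squares regression of a publication "score" on the log of its journal's Article Influence Score (AIS) and on citations; all variables are synthetic, and the estimated coefficients simply recover the relationship built into the simulated data.

```python
"""Illustrative sketch only: checking whether journal impact (AIS)
predicts a publication score once citations are controlled for.
The data are simulated; the paper's econometric specifications
are not reproduced here."""

import numpy as np

rng = np.random.default_rng(0)
n = 200

ais = rng.lognormal(mean=0.0, sigma=0.8, size=n)      # journal Article Influence Score
citations = rng.poisson(lam=10 * ais)                  # citations loosely tied to AIS
# synthetic "inferred REF score" on a 1-4 scale, driven mainly by AIS by construction
score = np.clip(np.round(1.5 + 1.2 * np.log(ais) + rng.normal(0, 0.5, n)), 1, 4)

# OLS of score on log(AIS) and log(1 + citations)
X = np.column_stack([np.ones(n), np.log(ais), np.log1p(citations)])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print("intercept, log(AIS), log(1+citations):", np.round(beta, 3))
```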

Implications for future evaluation

The results have important implications for the design of future research evaluations, as well as informing the ongoing discussion on the 2021 round of the REF. The use of bibliometrics in research evaluation has been intensely debated, and REF rules rely on review by experts, reflecting concerns about the responsible use of metrics.

The results show that at least in Economics and Econometrics, an index of journal impact accurately approximates experts' judgment, especially for outputs published in outstanding journals.

Dr. Marco Ovidi, from Queen Mary's School of Economics and Finance, who co-authored the study, says that their "research reveals that, in Economics and Econometrics, the classification of 2014 REF research outputs made by experts is very close to the classification that one would obtain by using a bibliometric indicator of impact."

"The differences are particularly small for research outputs published in high-impact Economics journals. Our results suggest that the costly process of peer reviews should be focused on finding hidden gems in journals with relatively lower reputation rather than overrated outputs in top-scoring outlets."

"We think our findings should be of interest to academic departments and research policy makers as the next research assessments, the 2021 REF, is under way."

More information: Research paper: dipartimenti.unicatt.it/economia-finanza-def106%20(1).pdf

