Does science need 'open evaluation' in addition to 'open access'?

Nov 14, 2012

In an editorial accompanying an ebook titled "Beyond open access: visions for open evaluation of scientific papers by post-publication peer review," Nikolaus Kriegeskorte argues that scientists, not publishers, are in the best position to develop a fair evaluation process for scientific papers. The ebook, published today in Frontiers, compiles 18 peer-reviewed articles that lay out detailed visions of how a transparent, open evaluation (OE) system could work for the benefit of all science. This transparency is paramount because the evaluation process is the central steering mechanism of science and influences public policy as well. The authors come from a wide variety of disciplines, including neuroscience, psychology, computer science, artificial intelligence, medicine, molecular biology, chemistry, and economics.

"Peer reviews should be made public information, like the scientific papers themselves. In a lot of ways, the network of is similar to a neural network. Each paper or peer review could be seen as a neuron with excitatory and inhibitory connections, and this information is vital in judging the value of its results," says Kriegeskorte, researcher at the University of Cambridge.

Yet unlike the richly interactive and ongoing activity in a neural network, the current peer review process is typically limited to 2-4 reviewers and remains fossilized in the pre-publication phase. According to Kriegeskorte, secretive and time-limited pre-publication peer review is no longer the optimal system. He writes, "Open evaluation, an ongoing post-publication process of transparent peer review and rating of papers, promises to address the problems of the current system. However, it is unclear how exactly such a system should be designed."

To explore possible design solutions for OE, Kriegeskorte and his student Diana Deca launched a Research Topic at Frontiers, a format in which a researcher chooses a topic and invites his or her peers to contribute articles. And while Kriegeskorte was expecting a divergent series of solutions, he says the visions turned out to be largely convergent: the evaluation of papers should be completely transparent, post-publication, perpetually ongoing, and backed by modern statistical methods for inferring the quality of papers; and the system should provide a plurality of perspectives on the literature.

According to Kriegeskorte, transparency is the antidote to corruption and bias. "Science will continue to rely on peer review, because it needs explicit expert judgments, rather than media buzz, to evaluate papers." He suggests a two-step process based on a fundamental division of powers. In the first step after a manuscript is published online, anyone can publicly post a review or rate the paper. In the second step, independent web-portals to the literature combine all the evaluations to give a prioritized perspective on the literature.
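As a rough sketch of that division of powers (the record fields and function names below are our own invention; the ebook's contributors prescribe no single data model):

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """One public, signed evaluation of a paper (fields hypothetical)."""
    paper_id: str
    reviewer: str     # public identity, per the transparency argument
    rating: float     # e.g. on a 0-10 scale
    review_text: str

# Step 1: a single shared, public pool of evaluations.
public_pool = []

def post_evaluation(ev):
    """Anyone may post; evaluations are public and permanent."""
    public_pool.append(ev)

# Step 2: a "portal" is any independent function from the public pool
# to a prioritized view of the literature. Portals may disagree.
def naive_portal(pool):
    """Rank papers by their mean rating -- the simplest possible portal."""
    ratings = {}
    for ev in pool:
        ratings.setdefault(ev.paper_id, []).append(ev.rating)
    return {pid: sum(rs) / len(rs) for pid, rs in ratings.items()}
```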

The scoring system could simply be an average of all of the ratings. But different web-portals could weight rating scales and individual reviewers differently. In the end, he believes, "the important thing is that scientists themselves take on the challenge of building the central steering mechanism for science: its evaluation system."
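One way a portal might implement such weighting is sketched below; the reviewer weights and sample ratings are hypothetical, and a real portal might instead infer weights from expertise or track record.

```python
def weighted_portal(ratings, reviewer_weight, default_w=0.1):
    """Score papers by a weighted mean of public ratings.

    ratings: list of (paper_id, reviewer, rating) triples.
    reviewer_weight: dict mapping reviewer -> weight; reviewers not in
    the dict get a small default weight rather than being ignored.
    """
    sums, norms = {}, {}
    for paper, reviewer, r in ratings:
        w = reviewer_weight.get(reviewer, default_w)
        sums[paper] = sums.get(paper, 0.0) + w * r
        norms[paper] = norms.get(paper, 0.0) + w
    return {paper: sums[paper] / norms[paper] for paper in sums}

# Two portals over the same public ratings can rank a paper differently:
pool = [("p1", "alice", 9), ("p1", "bob", 3)]
egalitarian = weighted_portal(pool, {"alice": 1.0, "bob": 1.0})
expertise   = weighted_portal(pool, {"alice": 1.0, "bob": 0.25})
print(egalitarian["p1"], expertise["p1"])  # 6.0 7.8
```

The point this illustrates is that the weighting lives in the portal, not in the shared pool of evaluations, so no single scoring scheme is imposed on everyone.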


User comments: 17


marble89
4.3 / 5 (6) Nov 14, 2012
This is a great idea that is long overdue.
Torbjorn_Larsson_OM
3.7 / 5 (3) Nov 14, 2012
It is certainly consistent with the post-publication process that is inherent in science and is among its oldest roots to boot.

Besides addressing the publishers leeching off the current publication system, it strikes a fine balance: any idea or result can be reviewed, but crackpot ideas aren't publicly sanctioned.

Weighting (reviewing) reviewers would certainly prevent both that and cronyism. So it could, and should, be tested. Posthaste.
ValeriaT
1 / 5 (1) Nov 14, 2012
Public review has many synergies; for example, it can accelerate the development of ideas through discussion. Like everything, it has its caveats too; for example, the loudest people tend to be the most biased. Another problem is that public review may be unreliable and ineffective at preventing fraud. For the moment, though, it still looks like a generally positive trend to me.
ValeriaT
3.7 / 5 (3) Nov 14, 2012
It is certainly consistent with the post-publication process
Because I instinctively like balanced approaches, it seems to me it would be optimal if the post-publication process complemented pre-publication validation rather than replacing it completely. Scientists should simply maintain both approaches. The problem is that the two are not symmetric: only published results can become the subject of public review, whereas anonymous review can be applied both before and after publication.
HannesAlfven
2.5 / 5 (4) Nov 15, 2012
It's actually just one of many things that badly need to be fixed with our system of science and science education. Perhaps the biggest problem is that many like to imagine that everything is working so well.
Squirrel
not rated yet Nov 15, 2012
The papers, with free downloads, can be found here: http://www.fronti..._for/137

antialias_physorg
5 / 5 (1) Nov 15, 2012
Nikolaus Kriegeskorte argues that scientists, not publishers, are in the best position to develop a fair evaluation process for scientific papers.

I completely agree. Peer review, as it is now, isn't half bad. But handing it to those who know how to do it best is a step toward making it better.

However, it is unclear how exactly such a system should be designed

Therein lies the rub. The advantage of the current system is that, due to the distributed nature of journals, the review process is distributed (i.e. there is no way to put political pressure on it that encompasses ALL journals and keeps 'unwanted publications' out of circulation).
A new system will have to make sure it keeps this decentralized approach - otherwise there is the danger of institutional bias.

completely transparent, post-publication, perpetually ongoing

This part (although laudable) looks like a lot of effort. I wonder who will have the time to do all these perpetual reviews.
antialias_physorg
4.2 / 5 (5) Nov 15, 2012
In the first step after a manuscript is published online, anyone can publicly post a review or rate the paper.
In the second step, independent web-portals to the literature combine all the evaluations to give a prioritized perspective on the literature. The scoring system could simply be an average of all of the ratings.

That part doesn't sound like a good idea. Papers are highly specific and require deep immersion in the subject (otherwise they're likely to be misunderstood, even by highly educated people in slightly different subjects). Even now it often happens that one out of five reviewers doesn't understand the paper in his OWN specialty that he's supposed to review.

So review by people from different specialties (and averaging independent of proficiency on the subject) is arguably worse than useless. THAT will surely distort the review process in a very bad way.
antialias_physorg
5 / 5 (5) Nov 15, 2012
and remains fossilized in the pre-publication phase

I would argue for keeping it in the pre-publication phase. Review has to be anonymous. If you have a chance to find out who the paper you are reviewing is from, you're introducing all kinds of potential bias.
(E.g. if the author has published several crank papers, there is no way that any new stuff of his will be reviewed objectively if the reviewers can just go look it up on open access.)

Maybe do it like this: Submit a paper to open access with the wish to have it reviewed (or not). If you don't wish for a review it will be flagged as such and put on open access.

If you do wish for a review, it will FIRST go through the (anonymous) review process and then be revised by the author (if needed) and re-reviewed until no further criticism from reviewers arises (much like it is done today in the review process)
OR the author decides to put it on open access as it is after some review round - with ALL reviews attached.
Eikka
5 / 5 (3) Nov 15, 2012
In the first step after a manuscript is published online, anyone can publicly post a review or rate the paper. In the second step, independent web-portals to the literature combine all the evaluations to give a prioritized perspective on the literature. The scoring system could simply be an average of all of the ratings.


This is a horrible idea.

Think of all the crackpots, lunatics and political zealots going through controversial articles and generating walls of text so tall and wide and impenetrable that the reviews themselves would need professional reviewers to weed out all the bullshit.

I mean, simply look at this comment section. Every day it's full of spam from amateurs who believe they've overturned General Relativity with crystal vibrations and ether waves.

And think of the wailing and gnashing of teeth, and conspiracy theories that ensue when you ban the lunatics and shills from reviewing the articles...
cantdrive85
1 / 5 (2) Nov 15, 2012
"The peer review system is satisfactory during quiescent times, but not during a revolution in a discipline such as astrophysics, when the establishment seeks to preserve the status quo." Hannes Alfvén
Tausch
3 / 5 (2) Nov 15, 2012
Open evaluation is worth discussing and developing. And - as marbles said - overdue.

Of course as Eikka stated:
...simply look at this comment section.


as a guide to consider what open evaluation schemes must avoid.
Thks squirrel for the posted link.
antialias_physorg
5 / 5 (2) Nov 15, 2012
Oops, you have neither the time nor the patience to build and maintain such blacklists? Just use Ed Witten's (or some other celebrity's) lists

The whole point of anonymous peer review is that personal tastes don't come into it when judging whether a paper is good or not. Science should be divested from the influence of having a 'famous' or 'not so famous' scientist write it. The material must be able to speak for itself (otherwise it isn't science but fiction).

So introducing personalized (or shared) white/blacklists is a very bad idea. That way we'd very soon get the problem that some people decry on this site: censorship of 'unwanted truths'. And this time it would be real.

Maggnus
2.3 / 5 (3) Nov 15, 2012
I'm with AP here: the reviews really must remain anonymous, and further, they should be anonymous on both sides, for the same reason. That is, when the poster of a paper looks at the reviews, his response should not be biased by the person doing the review, but rather focused on the critiques in the review.
Regardless, I suggest this will need to be an evolving system. While it has to start somewhere, it will have to evolve to continue to give relevant feedback to both the posters of papers and the reviewers of them.
marble89
3.7 / 5 (3) Nov 16, 2012
In a very general sense we might look at the history of Wikipedia as a clue to how this might work. The concept behind Wikipedia, that anyone can edit the encyclopedia, was ridiculed in its early days as unworkable. I was one of those skeptics. It is not perfect, but because many professionals now use it every day, the content is usually balanced and current.
marble89
1 / 5 (2) Nov 16, 2012
This also addresses the main gripe many of the "anti-science" folks have: the lack of cross-disciplinary peer review or "self-policing". Climate "science" is the worst example. We now have sociologists, economists, mathematicians, etc. publishing papers based on completely inappropriate use of datasets from other fields. Since only people in their own narrow disciplines review the research, a lot of crap gets through to the general public.
Jotaf
5 / 5 (2) Nov 18, 2012
For a real-life working example of post-publication discussion, see this year's ICML (International Conference on Machine Learning, for those from other fields). Here's an example discussion page for a particular paper:

http://icml.cc/di...291.html

I think it's great for detailed discussions, which often include reviews, as well as additional background and non-obvious links to other papers.

I'm not so keen on making the leap from this to ratings because of the issues raised before. Detailed, non-anonymous comments are one thing; uninformed ratings without a detailed analysis or review would just add noise and reinforce popularity trends (which already exist anyway, and don't need another echo chamber).