Tuesday 3 March 2020

With open peer review, open is just the beginning

Abstract

Open peer review does not just mean publishing the existing reviewer reports: it should also lead to reports written primarily for the public. We make a specific proposal for structured reviewer reports, based on the three criteria of validity, interest and clarity.
This post is partly based on a joint proposal with Anton Akhmerov for improving the structure of reviewer reports at SciPost. Feedback from Jean-Sébastien Caux on that proposal is gratefully acknowledged.


Benefits of open peer review: the obvious and the less obvious


In the traditional model of academic peer review, the reviewers’ reports on a submitted article are kept confidential, and this is a major source of inefficiency and waste. If the article is published, readers can neither assess how rigorous the review process was, nor benefit directly from the reviewers’ insights. If it is rejected, the reviewing work has to start all over again at another journal.

Open peer review, defined here as making the reports public, could help journals remedy the shortage of reviewers: if applied to rejected articles, by avoiding duplicated effort, and if coupled with naming the reviewers, by giving them better incentives to do the work. However, the consequences of open peer review may be more far-reaching. Published reports can indeed be used for evaluating an article’s interest and quality. In aggregate, they could be used for evaluating journals and researchers. For these purposes, they would certainly be better than citation counts.

While confidential reports are written for the article’s authors and the journal’s editors, published reports are also written for the public, and may eventually be written primarily for the public. This will necessarily change how reports are written. In order to guide this change and to take full advantage of it, journals should not just make existing reports public: they should also change the reports’ structure, in order to make them more readable and easier to compare across articles. In particular, the reports could help potential readers decide whether or not to read an article. Ideally, the reports’ format and structure should be journal-independent. Moreover, peer review need not be conducted solely in journals: it could also take place on platforms that make no publish/reject decisions.


Widespread adoption of open peer review


Like open access, open peer review is on its way to becoming standard, with even Nature tentatively joining in. See Wikipedia for more details. It is therefore time to rethink reports.

As a part of the scientific publishing system, the current organization of peer review is determined not by logic or efficiency, but by a historical trajectory rooted in paper-based communication within small communities. Open and frictionless electronic communication is bound to change the system profoundly, and to lead to a new system that differs as much from the old one as Wikipedia differs from the extinct paper-based encyclopedias. To build this new system, a good start would be to take inspiration from the online platforms that succeed in collaboratively generating good content, such as StackExchange and Wikipedia.

Structured reports: a proposal


The idea is to ask the three main questions that matter to potential readers: Is the article correct? Is it worthy of attention? Is it well written? Each question should be answered by a text summarized in a rating – equivalently, by a rating backed by an explanation. By a rating we mean a choice among four possible qualitative characterizations.



A few more details on the proposal:
  1. Open participation: in addition to invited reviewers, anyone could be allowed to write reports, starting with the authors themselves. Moreover, all fields are optional: for example, a non-expert might comment on the clarity only.
  2. Quick exchanges between reviewers, authors and readers: in the fashion of StackExchange, it would be good to allow short comments below each text field. Reports and short comments should appear online immediately: any vetting should be done a posteriori. Moreover, small corrections to the reports should be allowed.
  3. Official ratings could be given by journal editors when concluding the peer review process. These official ratings could be displayed prominently (possibly in graphic form), in order to ease comparisons between papers, and to provide benchmarks to future reviewers.
  4. Encouraging reviewers to renounce anonymity: the anonymity checkbox comes at the start, with No as the default choice.
  5. Versioning: since an article can have several versions, there should be a mechanism for the reports to be linked to these versions.
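
To make this concrete, here is a minimal sketch, in Python, of one possible data model for such reports. The class names, field names, rating labels and example identifiers are illustrative assumptions of this sketch; the proposal itself only fixes the three criteria (validity, interest, clarity), the four-level ratings, the optional fields, the non-anonymous default and the link to article versions.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Rating(Enum):
    """Four qualitative levels; the labels here are placeholders."""
    POOR = 1
    FAIR = 2
    GOOD = 3
    EXCELLENT = 4


@dataclass
class Criterion:
    """One of the three questions: validity, interest or clarity.

    Both the explanatory text and the rating are optional, so that,
    for example, a non-expert may comment on the clarity only.
    """
    text: Optional[str] = None
    rating: Optional[Rating] = None


@dataclass
class Report:
    """A structured report, linked to one specific version of an article."""
    article_id: str
    article_version: str                      # reports are tied to article versions
    anonymous: bool = False                   # "No" is the default choice
    reviewer_name: Optional[str] = None       # left empty if the reviewer stays anonymous
    validity: Criterion = field(default_factory=Criterion)
    interest: Criterion = field(default_factory=Criterion)
    clarity: Criterion = field(default_factory=Criterion)
    comments: List[str] = field(default_factory=list)  # short StackExchange-style comments


# Hypothetical example: a signed report on version 2 of an article,
# rating its validity and commenting on its clarity without rating it.
report = Report(
    article_id="2001.12345",
    article_version="v2",
    reviewer_name="A. Reviewer",
    validity=Criterion(text="The main argument looks sound.", rating=Rating.GOOD),
    clarity=Criterion(text="Section 3 is hard to follow."),
)
```

In such a model, the official ratings given by editors could reuse the same four-level scale, which would keep comparisons between papers straightforward.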


Case study: Open peer review at SciPost and how to improve it


To be specific, let us focus on the structure of reports at SciPost, a recently founded family of open access, open peer review journals. In a few years, SciPost reviewers have written hundreds of publicly available structured reports. Without attempting a systematic analysis of these reports, we will discuss how their structure could be improved.


 Main possible improvements to these reports:
  1. Eliminate the recommendation to publish or not, which is journal-dependent and of little interest to the public. From a well-structured report, a journal editor should easily be able to infer a recommendation. The hope is also to free reviewers from thinking about this difficult and artificial issue, and to diminish their power over authors, which could make exchanges with authors more constructive.
  2. Avoid vaguely defined text fields like Strengths and Weaknesses: a more precise structure would be better.
  3. Reduce the number of rating areas and of possible ratings: there are currently 6 rating areas with 6 or 7 possible ratings each, and reviewers end up giving ratings more or less arbitrarily, or not at all.
  4. The ratings should come with explanatory text fields; in other words, they should emerge from the report’s structure.
