Thursday 24 November 2022

No more rejections: eLife reinvents the academic journal

Academic journals as gatekeepers

Nowadays, the role of academic journals is not so much to disseminate information as to choose which submitted articles to accept or reject for publication. Gatekeeping was surely important when journals were actually printed, in order to save paper and shelf space. Today, it serves as a token of quality, which can make or break the careers of articles’ authors.

A journal’s publishing decisions are based on the peer review process, which involves the work of reviewers and editors. The process can be fair or biased, rigorous or cursory, constructive or confrontational. How would we know, when the result is only one bit of publicly available information? Not even one bit, actually: when an article is rejected, we do not even know that it was submitted.

In good cases, the process can in fact yield more useful information, because it results in the article being improved, and/or because the reviewers’ reports are made public. However, this generally does not happen when the article is rejected, even in journals whose practices are the best in their field, such as SciPost.

eLife’s new model

To avoid wasting the reviewers’ work, and to make it impossible to judge articles by the journals that publish them, the obvious solution is to eliminate accept/reject decisions. This is the basis of eLife’s new model, announced here. As with most radical progress, the challenge is to succeed in spite of the system’s corruption and its perverse incentives. In eLife’s case, this requires that the resulting articles count as publications for whoever manages the authors’ careers. To this end, eLife assigns DOIs, and the authors can designate an official Version of Record.

Will eLife thrive or perish? Will its new model be emulated? Will much-needed systemic change follow? Hard to tell. What I can do for the moment is discuss the model’s logic in more detail, and point out its strengths and weaknesses.

The editor is back

A striking feature of the new model is the crucial role of editors. In earlier times, editors ran their journals without help from reviewers: Einstein was outraged when presented with a report from an anonymous reviewer in 1936, for the first time in his career. Nowadays, editors delegate much work and responsibility to reviewers, who are often asked to provide explicit recommendations on whether to accept or reject submitted articles.

In eLife’s new model, reviewers still write reports, but they can focus on giving their scientific opinion without worrying about the journal’s publication criteria. Editors now make the important decisions:

  • Performing an initial screening to decide whether the submitted article deserves consideration, or should be desk-rejected. Given the large numbers of articles that are written nowadays, some sort of screening is unavoidable.

  • After peer review has been done, rating the article for significance and strength of support.

The purpose of the rating system is to help readers quickly decide whether an article is worthy of trust and attention. Crucially, ratings are not optional and delegated to reviewers, as in SciPost: they are done systematically, and they involve the editors. This will hopefully allow ratings to be done consistently and meaningfully, by people who are well placed to compare many articles. To evaluate the quality and interest of an article, explicit ratings are potentially much better than a crude accept/reject decision.

This system requires a lot of work and commitment from editors, which could well be a weakness. However, not rejecting articles and doing all the review work in public can certainly save a lot of the editors’ and reviewers’ time, if not at eLife in particular, then at least at a systemic level.

More details on the rating system

The rating system or “eLife assessment” system is described here:

  • 5 ratings for significance of findings, from useful to landmark.

  • 6 ratings for strength of support, from inadequate to exceptional.

These ratings are supposed to be embedded in a short text that summarizes the editors’ and reviewers’ opinion of the article. Having standard ratings promises to allow easy comparison between articles, and should be very valuable to readers. At the same time, it makes it possible to compile statistics and build metrics, with all their potential for misuse. Such metrics would however be directly based on an explicit evaluation of the article, and could therefore be more meaningful than citation numbers, let alone journal impact factors.

This rating system is not far from what I have been proposing in general and in particular. Possible improvements to the eLife system could include letting authors suggest ratings for their own articles, and adding a rating for clarity, i.e. quality of writing and presentation.

Reactions

  • Thoughtful blog post by Sophien Kamoun.

  • The usual criticism that researchers have to play by the rules of the system, and would be committing career suicide by participating in eLife’s new model. By this logic, eLife should have contented itself with performing good-quality open peer review: any attempt at further improvement is doomed.

  • HHMI’s enthusiastic support. After Plan S, another case of funders pushing researchers towards good practices.