Thursday, 24 November 2022

No more rejections: eLife reinvents the academic journal

Academic journals as gatekeepers

Nowadays, the role of academic journals is not so much to disseminate information as to choose which submitted articles to accept or reject for publication. Gatekeeping was surely important when journals were actually printed, in order to save paper and shelf space. Today, it serves as a token of quality, which can make or break the careers of the articles’ authors.

The journal’s publishing decisions are based on the peer review process, which involves the work of reviewers and editors. The process can be fair or biased, rigorous or cursory, constructive or confrontational. How would we know, when the result is only one bit of publicly available information? Not even one bit, actually: when an article is rejected, we do not even know that it was submitted.

In good cases, the process can in fact yield more useful information, because it results in the article being improved, and/or because the reviewers’ reports are made public. However, this generally does not occur when the article is rejected, even in journals with the best practices in their field, such as SciPost.

eLife’s new model

To avoid wasting the reviewers’ work, and to make it impossible to judge articles by the journals that publish them, the obvious solution is to eliminate accept/reject decisions. This is the basis of eLife’s new model, announced here. As with most radical progress, the challenge is to succeed in spite of the system’s corruption and its perverse incentives. In eLife’s case, this requires that the resulting articles count as publications for whoever manages the authors’ careers. To this end, eLife is assigning DOIs, and the authors can designate an official Version of Record.

Will eLife thrive or perish? Will its new model be emulated? Will much-needed systemic change follow? Hard to tell. What I can do for the moment is discuss the model’s logic in more detail, and point out its strengths and weaknesses.

The editor is back

A striking feature of the new model is the crucial role of editors. In older times, editors used to run their journals without help from reviewers: Einstein was outraged when, in 1936, for the first time in his career, he was presented with a report from an anonymous reviewer. Nowadays, editors delegate much work and responsibility to reviewers, who are often asked to provide explicit recommendations on whether to accept or reject submitted articles.

In eLife’s new model, reviewers still write reports, but they can focus on giving their scientific opinion without worrying about the journal’s publication criteria. Editors now make the important decisions:

  • Performing an initial screening to decide whether the submitted article deserves consideration or should be desk-rejected. Given the large number of articles that are written nowadays, some sort of screening is unavoidable.

  • After peer review has been done, rating the article for significance and strength of support.

The purpose of the rating system is to help readers quickly decide whether an article is worthy of trust and attention. Crucially, ratings are not optional and delegated to reviewers, as they are in SciPost: they are done systematically, and they involve the editors. This will hopefully allow the ratings to be done consistently and meaningfully, by people who are well placed to compare many articles. To evaluate the quality and interest of an article, explicit ratings are potentially much better than a crude accept/reject decision.

This system requires a lot of work and commitment from editors: this could well be a weakness. However, not rejecting articles and doing all the review work in public can certainly save a lot of the editors’ and reviewers’ time, if not at eLife in particular, at least at a systemic level.

More details on the rating system

The rating system, or “eLife assessment” system, is described here:

  • 5 ratings for significance of findings, from useful to landmark.

  • 6 ratings for strength of support, from inadequate to exceptional.

These ratings are supposed to be embedded in a short text that summarizes the editors’ and reviewers’ opinion of the article. Having standard ratings promises to allow easy comparison between articles, and should be very valuable to readers. At the same time, it makes it possible to do statistics and build metrics, with all their potential for misuse. These metrics would however be directly based on an explicit evaluation of the article, and could therefore be more meaningful than citation numbers, let alone journal impact factors.

This rating system is not far from what I have been proposing in general and in particular. Possible improvements to the eLife system could include letting authors suggest ratings for their own articles, and adding a rating for clarity, i.e. the quality of writing and presentation.


Some reactions to the announcement:

  • A thoughtful blog post by Sophien Kamoun.

  • The usual criticism that researchers have to play by the rules of the system, and would be committing career suicide by participating in eLife’s new model. By this logic, eLife should have been content to perform good-quality open peer review: any attempt at further improvement is doomed.

  • HHMI’s enthusiastic support. After Plan S, another case of funders pushing researchers towards good practices.

Tuesday, 4 October 2022

Surprisingly good CNRS guidelines on open access

When it comes to good practices, research institutions are often good at declarations of principles, and not so good at implementation. For example, it is easy to declare that research assessment should be qualitative and not rely too much on bibliometrics, but harder to do it in practice.

This is why I am pleasantly surprised by recent CNRS guidelines on open access. These guidelines take the form of a letter to CNRS researchers, who have to fill in yearly reports on their work.


The salient points about open access are:

  • Over the last three years, the proportion of articles by CNRS researchers that are openly accessible rose from 49% to 77%. (OK, these data mean little in the absence of more details.)

  • CNRS is adopting a rights retention strategy, as proposed by Coalition S: it recommends that all articles be distributed under a CC-BY license. In particular, this allows the articles to be made openly accessible right at the time of publication.

  • CNRS is not asking researchers for their lists of publications: instead, CNRS just takes publication lists from the national preprint archive HAL.

  • The weak point of all this seems to be the impractical and clunky nature of HAL. However, as the letter points out, HAL is increasingly interoperable with disciplinary archives such as ArXiv and BiorXiv. And indeed, my recent articles were automatically harvested by HAL from ArXiv.

This goes in the direction of having a strong open access mandate while requiring no extra work from researchers. To get there, it remains to make the CC-BY license mandatory, and the upload of articles to HAL fully automatic.

Wednesday, 16 March 2022

The war in Ukraine: directives to French researchers from CEA and CNRS

My previous post reproduced a letter from Maxim Chernodub, suggesting how French scientists could help Ukrainian colleagues, and also calling on French scientists not to boycott Russian collaborators. However, it seems that we will not have much choice in the matter, at least if we follow the official directives, which I will paraphrase below.

Directives from CEA (received by email)

Russian scientists may attend conferences online, but only as individual experts, i.e. provided they do not represent an institution.

Already submitted publications are OK. But it is forbidden to submit a new publication with a coauthor who is affiliated with a Russian institution.

Press release from CNRS

CNRS is suspending all new scientific collaboration with Russia. Russian researchers who work in France may continue their activities. 


From CNRS we only have a press release so far. The precise directives, when they come, may well look like CEA's. It is unclear how these directives will be enforced. Most scientific journals still allow submissions from Russia-based authors. 

As far as I can tell, such drastic measures against Russian scientists are unprecedented. No similar measures were taken in other cases of human rights abuses, including wars of aggression.

For researchers who want to keep collaborating with Russians without violating the official directives, technical loopholes might help: the directive from CEA is about submitting publications, but preprints are not publications, right? Co-authors may not be affiliated in Russia, but could they publish as private individuals, without citing an affiliation?

The war in Ukraine: a letter from Maxim Chernodub

Since the beginning of the Russian invasion of Ukraine, Western researchers have been wondering how to help our Ukrainian colleagues, and how to behave with our Russian colleagues. A letter by the French-Russian-Ukrainian physicist Maxim Chernodub has been circulating, which offers valuable perspective and advice on these issues. (It was written on 27 February 2022.) Below is the text of the letter, reproduced with the author's permission.

Friday, 31 December 2021

Reforming research assessment: nice declarations, little action?

There seems to be a consensus among universities and research funders that research assessment should not be based on crude quantitative metrics, such as numbers of articles, numbers of citations, journal impact factors, the h-index, etc. The 2012 San Francisco Declaration on Research Assessment (DORA) formulates principles which could greatly improve research assessment if they were applied, although I would argue that the DORA is misguided in its recommendations to authors. The DORA has been signed by thousands of organizations: in France alone, these include the Academy of Sciences, CNRS and HCERES. More recently, the European Commission has issued a report called Towards a reform of the research assessment system, which deals with the same issues and promotes similar principles.

Since the same principles have to be reiterated 9 years later, you may think that little has changed in all that time. And you would be largely right. Significant reforms of research assessment in individual organizations are so rare that they are newsworthy. And some universities are denounced for taking actions that directly contradict the principles they have officially endorsed.

Saturday, 23 January 2021

Open access by decree: a success of Plan S?

“Science family of journals announces change to open-access policy”: the title of this Nature news article may sound boring, but the news is not:

Science is allowing authors to distribute their articles under a Creative Commons license, free of charge, without embargo.

Sounds too good to be true? Well, of course, there are caveats. First, this is the author’s accepted version, not the published version. (OK, minor point.) Second, this is a trial policy, which may or may not be made permanent. And third, this only applies to authors who are mandated to do so by their Coalition S funders.

Who, you may ask, are these happy authors? I will defer to Wikipedia for reminders about Plan S and Coalition S, and move to the next question: how did Coalition S achieve this? Last summer, Coalition S announced a Rights Retention Strategy that mandates fundees to distribute their work under CC license, without embargo, no matter what the publisher makes them sign. In their own words,

This public licence and/or agreed prior obligations take legal precedence over any later Licence to Publish or Copyright Transfer Agreement that the publisher may ask the author to sign.

This type of “open access by decree” may sound like little more than an exercise in blame-shifting. To an author who is caught between a funder and a journal’s incompatible demands, it should not matter who blacklists the other: the author cannot publish in that journal. However, Plan S got bad press among some researchers for appearing to blacklist journals. Now it is the journals who will have to blacklist funders, and to reject submitted articles for technicalities.

The success of the maneuver depends on publishers’ reactions. Coalition S is too big to ignore, so publishers have to take a stand. Nature journals have recently announced their adoption of Gold open access options, for a charge of the order of 9,500 euros per article. This was denounced as outrageously expensive. Björn Brembs even took it as a refutation of the widespread idea that the author-pays model would lead to lower costs.

However, Coalition S-funded authors can now choose between publishing in Science for free, or in Nature for 9,500 euros. It does seem that Science is competing on price! Not so fast: Nature’s high price is partly an illusion, as it comes in the context of transformative agreements, which are supposed to transition Springer Nature’s journals to open access. The idea is that academic institutions’ journal subscriptions would also cover open access publishing of their researchers’ articles. For an author who is covered by a transformative agreement, publishing open access in Nature is effectively free, which may be why Science had to offer the same for free, just to stay competitive.

At this stage, it is not clear which authors will face which choices of open access options and pricing. It is therefore a bit early to see the effects of the Rights Retention Strategy. At least, we now have the admission by an important publisher that open access is not an extra service that should bring them extra revenue.

Friday, 4 September 2020

Does this covariant function belong to some 2d CFT?

In conformal field theory, correlation functions of primary fields are covariant functions of the fields’ positions. For example, in two dimensions, a correlation function of N diagonal primary fields must be such that
$$\begin{aligned} F(z_1,z_2,\cdots , z_N) = \prod_{j=1}^N {|cz_j+d|^{-4\Delta_j}}  F\left(\tfrac{az_1+b}{cz_1+d},\tfrac{az_2+b}{cz_2+d},\cdots , \tfrac{az_N+b}{cz_N+d}\right) \ , \end{aligned}$$
where zj ∈ ℂ are the fields’ positions, Δj ∈ ℂ their conformal dimensions, and $\left(\begin{smallmatrix} a& b \\ c& d \end{smallmatrix}\right)\in SL_2(\mathbb{C})$ is a global conformal transformation. In addition, there are nontrivial relations between different correlation functions, such as crossing symmetry. But given just one covariant function, do we know whether it belongs to a CFT, and what can we say about that CFT?

In particular, in two dimensions, do we know whether the putative CFT has local conformal symmetry, and if so what is the Virasoro algebra’s central charge?

Since covariance completely fixes three-point functions up to an overall constant, we will focus on four-point functions i.e. N = 4. The stimulus for addressing these questions came from the correlation functions in the Brownian loop soup, recently computed by Camia, Foit, Gandolfi and Kleban. (Let me thank the authors for interesting correspondence, and Raoul Santachiara for bringing their article to my attention.)
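The claim that covariance fixes three-point functions up to an overall constant can be checked numerically. Below is a small sanity check, my own sketch rather than anything from the post, verifying that the standard three-point function of diagonal primaries satisfies the covariance property written above:

```python
# Numerical sanity check (illustration): the standard three-point function of
# diagonal primary fields, fixed by covariance up to a constant, should satisfy
# F(z_j) = prod_j |c z_j + d|^{-4 Delta_j} * F(z'_j)
# for any global conformal map z -> z' = (az + b)/(cz + d) with ad - bc = 1.

def three_point(zs, dims):
    """Covariant three-point function of diagonal primaries (constant set to 1)."""
    (z1, z2, z3), (d1, d2, d3) = zs, dims
    return (abs(z1 - z2) ** (2 * (d3 - d1 - d2))
            * abs(z1 - z3) ** (2 * (d2 - d1 - d3))
            * abs(z2 - z3) ** (2 * (d1 - d2 - d3)))

def mobius(z, a, b, c, d):
    """Global conformal transformation of a position z."""
    return (a * z + b) / (c * z + d)

# Arbitrary positions, conformal dimensions, and an SL(2, C) matrix.
zs = [0.3 + 0.7j, -1.2 + 0.1j, 2.5 - 0.4j]
dims = [0.125, 0.5, 0.3]
a, b, c = 1.1 + 0.2j, 0.4 - 0.3j, 0.6 + 0.1j
d = (1 + b * c) / a  # enforce ad - bc = 1

lhs = three_point(zs, dims)
prefactor = 1.0
for z, dim in zip(zs, dims):
    prefactor *= abs(c * z + d) ** (-4 * dim)
rhs = prefactor * three_point([mobius(z, a, b, c, d) for z in zs], dims)

assert abs(lhs - rhs) / abs(lhs) < 1e-8
print("covariance check passed")
```

Real dimensions are used here for simplicity; the identity holds just as well for complex Δj, in which case the function becomes complex-valued.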

Doesn’t any covariant function belong to multiple 2d CFTs?

In conformal field theory, any correlation function can be written as a linear combination of s-channel conformal blocks. These conformal blocks are a particular basis of smooth covariant functions, labelled by a conformal dimension and a conformal spin. (I will not try to say precisely what smooth means.) In two dimensions, we actually have a family of bases, parametrized by the central charge c, with the limit c = ∞ corresponding to global conformal symmetry rather than local conformal symmetry.
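Schematically, and with notation that is mine rather than standard to this post, the decomposition just described reads:
$$F(z_1,z_2,z_3,z_4) = \sum_{(\Delta,\, s)} c_{\Delta, s}\, \mathcal{G}^{(c)}_{\Delta, s}(z_1,z_2,z_3,z_4)\ ,$$
where the sum runs over conformal dimensions Δ and conformal spins s, the coefficients c_{Δ, s} depend on the function F, and 𝒢^{(c)}_{Δ, s} is the s-channel conformal block at central charge c. Since such a basis exists for each value of c, the same covariant function admits a decomposition of this type for any central charge, which is the point of this section's title question.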

Thursday, 27 August 2020

(2/2) Open access mystery: why did ERC backstab Plan S?

In my first post about the ERC’s recent withdrawal from supporting Plan S, I tried to explain ERC’s announcement using publicly available information on the ERC, Plan S, and their recent news. The potential dangers of this approach were to miss relevant pieces of information, and to give too much weight to calendar coincidences.

We are still waiting for a detailed and convincing explanation from the ERC, and for a description of their open access strategy if they still have one. Meanwhile, I would like to complete the picture based on informal contacts with a few well-informed colleagues. There emerge two potential lines of explanation.

Wednesday, 22 July 2020

(1/2) Open access mystery: why did the ERC backstab Plan S?

The European Research Council (ERC) just announced that they would withdraw their support for Coalition S, the consortium of research funders behind Plan S. Plan S is the valiant but not universally welcome attempt to impose strong open access requirements on research articles, without paying more money to publishers.

The ERC is Europe’s most prestigious research funder, and a main backer of Plan S. Without Plan S, the ERC has no open access strategy, and without the backing of the ERC, Coalition S may not be big enough to succeed. Why would the ERC make this U-turn? I do not know, but let me gather a few potentially relevant pieces of the puzzle. The pieces are of three types:
  • some context on the ERC and more generally on Europe’s research plans,
  • the recently announced rights retention strategy by Coalition S,
  • ERC’s meager and not very credible justification for their withdrawal.

Tuesday, 3 March 2020

With open peer review, open is just the beginning


Open peer review does not just mean publishing existing reviewer reports: it should also lead to writing reports primarily for the public. We make a specific proposal for structured reviewer reports, based on the three criteria of validity, interest and clarity.
This post is partly based on a joint proposal with Anton Akhmerov for improving the structure of reviewer reports at SciPost. Feedback from Jean-Sébastien Caux on that proposal is gratefully acknowledged.


Benefits of open peer review: the obvious and the less obvious

In the traditional model of academic peer review, reviewers’ reports on the submitted article are kept confidential, and this is a big source of inefficiency and waste. If the article is published, the readers can neither assess how rigorous the process was, nor benefit directly from the reviewers’ insights. If it is rejected, the work has to start all over again at another journal.

Open peer review, defined here as making the reports public, could help journals remedy the shortage of reviewers: if applied to rejected articles, by avoiding duplicated effort, and if coupled with naming reviewers, by giving them better incentives to do the work. However, the consequences of open peer review may be more far-reaching. Published reports can indeed be used for evaluating an article’s interest and quality. In aggregate, they could be used for evaluating journals and researchers. For these purposes, they would certainly be better than citation counts.

Wednesday, 27 November 2019

Preprints and other tools of open research: contribution to the Open Access roundtable at GFP 2019

In the context of the conference GFP 2019 on polymer chemistry, I am taking part in a roundtable on Open Access. Chemists are coming quite late to the Open Access debate. The preprint archive Chemrxiv is young, not widely used, and not independent from publishers. The traditional subscription-based publishing system, and the standard bibliometric indicators, dominate communication and evaluation. And when chemists are dragged into the debate by discipline-agnostic initiatives such as Plan S, their positions tend to be conservative.

Inevitably, chemists are being affected by Open Access and other evolutions of the research system, whether or not these evolutions seem beneficial to them. It would be useful for chemists to know more about preprints and other tools of scientific communication, beyond the traditional journals: not only to comply with Open Access mandates, but also to make their own choices among the existing innovations and best practices.

Tuesday, 1 October 2019

Learning scientific writing from great writers

For scientists, writing well (or well enough) is a critical skill, as written texts are essential for communicating research. Of course, not every scientist needs to write well personally, as some may rely on collaborators. In a lecture on "Writing physics", David Mermin emphasizes the importance of language and writing through a famous example:
It is also said that even Landau's profound technical papers were actually written by Lifshitz. Many physicists look down on Lifshitz: Landau did the physics, Lifshitz wrote it up. I don't believe that for a minute. If Evgenii Lifshitz really wrote the amazing papers of Landau, he was doing physics of the highest order. Landau was not so much a coauthor, as a natural phenomenon — an important component of the remarkable coherence of the physical world that Lifshitz wrote about so powerfully in the papers of Landau.