Friday 19 April 2024

The state and prospects of academic peer review

(A PDF version of this post is available on HAL.)

Peer review plays a crucial role in the communication and management of research. How well is it working? How could it be improved? The answers depend on whom you ask. Here they are debated by seven characters with different points of view.

1. Osman: Now that everyone is here, let us start this debate on academic peer review: should it be strengthened, overhauled, or abandoned? When I arrived, Clementine was already in the room, reviewing a paper and complaining about it. This leads to my first question: is peer review taking too much time and effort? 

2. Clementine: I was indeed complaining, but like most colleagues I realize that peer review is an essential part of our work: this is how we validate each other’s work, this is how we assess its interest and likely impact, and this is how we collectively improve how papers are written. Now, the problem is that so many papers get submitted nowadays that it becomes hard to do a proper job on all of them. Even after declining many invitations to review, I have to be pragmatic and focus on the main question: is the submitted paper worth publishing in that journal? If the paper is obviously good work by well-known people, it can be waved through with minimal scrutiny. If it is mediocre work, the rejection should be short and final – I do not want to get dragged into a technical debate with the authors. If, however, I am unsure whether the paper should be accepted after a quick first reading, then I need to scrutinize it and find a good reason for either rejecting it or accepting it after some improvements.

3. Deirdre: So you spend more time reviewing second-rate works than really interesting papers? I am glad I stopped reviewing years ago. 

Friday 21 April 2023

An environmental boycott of Elsevier

Since 2012, thousands of academics have been boycotting the academic publisher Elsevier, which they blame for overpricing its journals, and more generally for resisting open access to the scientific literature. Of course, most major academic publishers are guilty of the same, but Elsevier stands out as the worst offender. For instance, Elsevier was the last major publisher to join the Initiative for Open Citations, years after all the others. Elsevier did not join the Initiative for Open Abstracts, and it plays a leading role in the legal persecution of Sci-Hub.

In February 2022, Elsevier was denounced for helping the fossil fuel industry via its publishing and consulting activities. Again, other publishers are doing that too, and Elsevier is only the worst (or most prominent) offender. This has led to a (no longer active) petition by the Union of Concerned Scientists. Elsevier was not impressed, and now there is a campaign called Stop Elsevier.

This campaign includes a boycott. Participants may commit to

  • Refuse to Review

  • Refuse to Submit

  • Write to Editors

  • Refuse to Edit

  • Take Direct Action

  • Share Boycott on Social Media

There are also various options for allowing these commitments to be publicly shared, as in the original Cost of Knowledge boycott. For the moment, they are still kept private, though.

The new boycott won’t sink Elsevier by itself, but it could strengthen Elsevier’s well-deserved reputation for unprincipled greed. This would help discourage academic institutions from doing business with Elsevier.

Thursday 29 December 2022

Hypocrites in the air, and Barnabas Calder

Since travel by plane is one of the main sources of carbon emissions by researchers, climate scientists who fly have been called hypocrites in the air. The expression could be applied to many other researchers who worry about climate change (without necessarily working on the subject), but fly much more than is really needed.

But what counts as “really needed” plane travel for a researcher? Surely, we could eliminate much plane travel without compromising our work, by renouncing useless or marginally useful meetings, conferences or visits. Videoconferencing can help: it is often worthwhile to save the time and expense of travelling by accepting some loss in communication quality. However, if we take the climate seriously, this cannot be enough. Emissions need not just be halved, but be brought close to zero, and quickly.

Taking the climate seriously implies reducing emissions even when this is detrimental to research – or other activities. Some researchers are already doing it. Barnabas Calder, author of Architecture: from prehistory to climate emergency, explains in his introduction that he did not visit many of the buildings he discusses:

Parts of the book might have been better with first-hand experience of the buildings, but in a world that can ill afford the carbon burden of jet-fuelled travel, the move towards sustainable energy consumption will involve bigger and tougher compromises than this.

Unlike most research subjects, Calder’s work is relevant to addressing climate change, and yet he accepts the handicap of travel restrictions. What do we call the opposite of a hypocrite in the air? An honest scientist on the ground?

Thursday 24 November 2022

No more rejections: eLife reinvents the academic journal

Academic journals as gatekeepers

Nowadays, the role of academic journals is not so much to disseminate information as to choose which submitted articles to accept or reject for publication. Gatekeeping was surely important when journals were actually printed, in order to save paper and shelf space. Today, it serves as a token of quality, which can make or break the careers of the articles’ authors.

The journal’s publishing decisions are based on the peer review process, which involves the work of reviewers and editors. The process can be fair or biased, rigorous or cursory, constructive or confrontational. How would we know, when the result is only one bit of publicly available information? Not even one bit actually, since when an article is rejected, we do not even know that it was submitted.

In good cases, the process can in fact yield more useful information: because it results in the article being improved, and/or because the reviewers’ reports are made public. However, this generally does not occur when the article is rejected, even in best-practices-in-their-field journals such as SciPost.

eLife’s new model

To avoid wasting the reviewers’ work, and to make it impossible to judge articles by the journals that publish them, the obvious solution is to eliminate accept/reject decisions. This is the basis of eLife’s new model, announced here. As with most radical progress, the challenge is to succeed in spite of the system’s corruption and its perverse incentives. In eLife’s case, this requires that the resulting articles count as publications for whoever manages the authors’ careers. To this end, eLife is assigning DOIs, and the authors can designate an official Version of Record.

Will eLife thrive or perish? Will its new model be emulated? Will much-needed systemic change follow? Hard to tell. What I can do for the moment is discuss the model’s logic in more detail, and point out its strengths and weaknesses.

The editor is back

A striking feature of the new model is the crucial role of editors. In the old days, editors ran their journals without help from reviewers — Einstein was outraged when presented with a report from an anonymous reviewer in 1936, for the first time in his career. Nowadays, editors delegate much work and responsibility to reviewers, who are often asked to provide explicit recommendations on whether to accept or reject submitted articles.

In eLife’s new model, reviewers still write reports, but they can focus on giving their scientific opinion without worrying about the journal’s publication criteria. Editors now make important decisions:

  • Performing an initial screening to decide whether the submitted article deserves consideration, or should be desk-rejected. Given the large numbers of articles that are written nowadays, some sort of screening is unavoidable.

  • After peer review has been done, rating the article for significance and strength of support.

The purpose of the rating system is to help readers quickly decide whether the article is worthy of trust and attention. Crucially, ratings are neither optional nor delegated to reviewers, as they are in SciPost: they are done systematically, and they involve the editors. This will hopefully allow the ratings to be done consistently and meaningfully, by people who are well placed to compare many articles. To evaluate the quality and interest of an article, explicit ratings are potentially much better than a crude accept/reject decision.

This system requires a lot of work and commitment from editors: this could well be a weakness. However, not rejecting articles and doing all the review work in public can certainly save a lot of the editors’ and reviewers’ time, if not at eLife in particular, at least at a systemic level.

More details on the rating system

The rating system or “eLife assessment” system is described here:

  • 5 ratings for significance of findings, from useful to landmark.

  • 6 ratings for strength of support, from inadequate to exceptional.

These ratings are supposed to be embedded in a short text that summarizes the editors’ and reviewers’ opinion of the article. Having standard ratings promises to allow easy comparison between articles, and should be very valuable to readers. At the same time, it makes it possible to do statistics and build metrics, with all their potential for misuse. These metrics would however be directly based on an explicit evaluation of the article, and could therefore be more meaningful than citation numbers, let alone journal impact factors.

This rating system is not far from what I have been proposing in general and in particular. Possible improvements to the eLife system could include letting authors suggest ratings for their own articles, and adding a rating for clarity, i.e. quality of writing and presentation.

Reactions

  • Thoughtful blog post by Sophien Kamoun.

  • The usual criticism that researchers have to play by the rules of the system, and would commit career suicide by participating in eLife’s new model. By this logic, eLife should have been content to perform good-quality open peer review: attempts at further improvement are doomed.

  • HHMI’s enthusiastic support. After Plan S, another case of funders pushing researchers towards good practices.

Tuesday 4 October 2022

Surprisingly good CNRS guidelines on open access

When it comes to good practices, research institutions are often good at declarations of principles, and not so good at implementation. For example, it is easy to declare that research assessment should be qualitative and not rely too much on bibliometrics, but harder to do it in practice.

This is why I am pleasantly surprised by recent CNRS guidelines on open access. These guidelines take the form of a letter to CNRS researchers, who have to file yearly reports on their work.

The salient points about open access are:

  • Over the last three years, the proportion of articles by CNRS researchers that are openly accessible rose from 49% to 77%. (OK, these data mean little in the absence of more details.)

  • CNRS is adopting a rights retention strategy, as proposed by Coalition S: it recommends that all articles be distributed under a CC-BY license. In particular, this allows the articles to be made openly accessible right at the time of publication.

  • CNRS is not asking researchers for their lists of publications: instead, CNRS just takes publication lists from the national preprint archive HAL.

  • The weak point of all this seems to be the impractical and clunky nature of HAL. However, as the letter reminds us, HAL is increasingly interoperable with disciplinary archives such as arXiv and bioRxiv. And indeed, my recent articles were automatically harvested by HAL from arXiv. (A way of checking this via HAL’s API is sketched at the end of this post.)

This goes in the direction of having a strong open access mandate while requiring no extra work from researchers. To get there, it remains to make the CC-BY license mandatory, and the upload of articles to HAL fully automatic.
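
As an aside, here is how one might check whether a given arXiv preprint has already been harvested into HAL, by querying HAL’s public search API. This is only a minimal sketch: the endpoint is the documented public one, but the Solr field names (arxivId_s, halId_s, title_s, uri_s) and the exact response layout are assumptions from memory, to be checked against the API documentation before relying on them.

```python
# Minimal sketch: look up arXiv preprints in HAL via the public search API.
# The field names (arxivId_s, halId_s, title_s, uri_s) are assumptions and
# should be verified against the documentation at api.archives-ouvertes.fr.
import requests

HAL_API = "https://api.archives-ouvertes.fr/search/"

def hal_record_for_arxiv(arxiv_id: str):
    """Return the first HAL record matching an arXiv identifier, or None."""
    params = {
        "q": f"arxivId_s:{arxiv_id}",   # assumed name of the arXiv-id field
        "fl": "halId_s,title_s,uri_s",  # fields to return (assumed names)
        "wt": "json",                   # Solr-style JSON output
        "rows": 1,
    }
    response = requests.get(HAL_API, params=params, timeout=10)
    response.raise_for_status()
    docs = response.json()["response"]["docs"]
    return docs[0] if docs else None

if __name__ == "__main__":
    # Hypothetical arXiv identifier, for illustration only.
    record = hal_record_for_arxiv("2209.01234")
    print(record["halId_s"] if record else "not in HAL yet")
```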

Wednesday 16 March 2022

The war in Ukraine: directives to French researchers from CEA and CNRS

My previous post reproduced a letter from Maxim Chernodub, suggesting how French scientists could help Ukrainian colleagues, and also calling on French scientists not to boycott Russian collaborators. However, it seems that we will not have much choice in the matter, at least if we follow the official directives, which I will paraphrase below.

Directives from CEA (received by email)

Russian scientists may remotely attend online conferences, but only as individual experts, i.e. provided they do not represent an institution. 

Already submitted publications are OK. But it is forbidden to submit a new publication with a coauthor who is affiliated in Russia. 

Press release from CNRS

CNRS is suspending all new scientific collaboration with Russia. Russian researchers who work in France may continue their activities. 

Interpretation

From CNRS we only have a press release so far. The precise directives, when they come, may well look like CEA's. It is unclear how these directives will be enforced. Most scientific journals still allow submissions from Russia-based authors. 

As far as I can tell, such drastic measures against Russian scientists are unprecedented. No similar measures were taken in other cases of human rights abuses, including wars of aggression.

For researchers who want to keep collaborating with Russians without violating the official directives, technical loopholes might help: the directive from CEA is about submitting publications, but preprints are not publications, right? Co-authors may not be affiliated in Russia, but could they publish as private individuals, without citing an affiliation?

The war in Ukraine: a letter from Maxim Chernodub

Since the beginning of the Russian invasion of Ukraine, Western researchers have been wondering how to help our Ukrainian colleagues, and how to behave with our Russian colleagues. A letter by French-Russian-Ukrainian physicist Maxim Chernodub, which offers valuable perspective and advice on these issues, has been circulating. (It was written on 27 February 2022.) Below is the text of the letter, reproduced with the author's permission.

Friday 31 December 2021

Reforming research assessment: nice declarations, little action?

There seems to be a consensus among universities and research funders that research assessment should not be based on crude quantitative metrics, such as: numbers of articles, numbers of citations, journal impact factors, the h-index, etc. The 2012 San Francisco Declaration on Research Assessment (DORA) formulates principles which could greatly improve research assessment if they were applied, although I would argue that the DORA is misguided in its recommendations to authors. The DORA has been signed by thousands of organizations: just for France this includes the Academy of Sciences, CNRS and HCERES. More recently, the European Commission has issued a report called Towards a reform of the research assessment system, which deals with the same issues and promotes similar principles.

Since the same principles have to be reiterated 9 years later, you may think that little has changed in all that time. And you would be largely right. Significant reforms of research assessment in individual organizations are so rare that they are newsworthy. And some universities are denounced for taking actions that directly contradict the principles they have officially endorsed.

Saturday 23 January 2021

Open access by decree: a success of Plan S?

“Science family of journals announces change to open-access policy”: the title of this Nature news article may sound boring, but the news is not:

Science is allowing authors to distribute their articles under a Creative Commons license, free of charge, without embargo.

Sounds too good to be true? Well, of course, there are caveats. First, this is the author’s accepted version, not the published version. (OK, minor point.) Second, this is a trial policy, which may or may not be made permanent. And third, this only applies to authors who are mandated to do so by their Coalition S funders.

Who, you may ask, are these happy authors? I will defer to Wikipedia for reminders about Plan S and Coalition S, and move to the next question: how did Coalition S achieve this? Last summer, Coalition S announced a Rights Retention Strategy that mandates fundees to distribute their work under a CC license, without embargo, no matter what the publisher makes them sign. In their own words,

This public licence and/or agreed prior obligations take legal precedence over any later Licence to Publish or Copyright Transfer Agreement that the publisher may ask the author to sign.

This type of “open access by decree” may sound like little more than an exercise in blame-shifting. To an author who is caught between a funder and a journal’s incompatible demands, it should not matter who blacklists the other: the author cannot publish in that journal. However, Plan S got bad press among some researchers for appearing to blacklist journals. Now it is the journals who will have to blacklist funders, and to reject submitted articles for technicalities.

The success of the maneuver depends on publishers’ reactions. Coalition S is too big to ignore, so publishers have to take a stand. Nature journals have recently announced their adoption of Gold open access options, for a charge of the order of 9,500 euros per article. This was denounced as outrageously expensive. Björn Brembs even took it as a refutation of the widespread idea that the author-pays model would lead to lower costs.

However, Coalition S-funded authors can now choose between publishing in Science for free, or in Nature for 9,500 euros. It does seem that Science is competing on price! Not so fast: Nature’s high price is partly an illusion, as it comes in the context of transformative agreements, which are supposed to transition Springer Nature’s journals to open access. The idea is that academic institutions’ journal subscriptions would also cover open access publishing of their researchers’ articles. For an author who is covered by a transformative agreement, publishing open access in Nature is effectively free, which may be why Science had to offer the same for free, just to stay competitive.

At this stage, it is not clear which authors will face which choices of open access options and pricing. It is therefore a bit early to see the effects of the Rights Retention Strategy. At least, we now have the admission by an important publisher that open access is not an extra service that should bring publishers extra revenue.

Friday 4 September 2020

Does this covariant function belong to some 2d CFT?

In conformal field theory, correlation functions of primary fields are covariant functions of the fields’ positions. For example, in two dimensions, a correlation function of N diagonal primary fields must be such that
$$\begin{aligned} F(z_1,z_2,\cdots , z_N) = \prod_{j=1}^N {|cz_j+d|^{-4\Delta_j}}  F\left(\tfrac{az_1+b}{cz_1+d},\tfrac{az_2+b}{cz_2+d},\cdots , \tfrac{az_N+b}{cz_N+d}\right) \ , \end{aligned}$$
where $z_j\in\mathbb{C}$ are the fields’ positions, $\Delta_j\in\mathbb{C}$ their conformal dimensions, and $\left(\begin{smallmatrix} a& b \\ c& d \end{smallmatrix}\right)\in SL_2(\mathbb{C})$ is a global conformal transformation. In addition, there are nontrivial relations between different correlation functions, such as crossing symmetry. But given just one covariant function, do we know whether it belongs to a CFT, and what can we say about that CFT?

In particular, in two dimensions, do we know whether the putative CFT has local conformal symmetry, and if so what is the Virasoro algebra’s central charge?

Since covariance completely fixes three-point functions up to an overall constant, we will focus on four-point functions, i.e. N = 4. The stimulus for addressing these questions came from the correlation functions in the Brownian loop soup, recently computed by Camia, Foit, Gandolfi and Kleban. (Let me thank the authors for interesting correspondence, and Raoul Santachiara for bringing their article to my attention.)
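
For the record, and in the conventions of the covariance equation above, the covariance-fixed form of a three-point function of diagonal primaries is
$$\begin{aligned} F(z_1,z_2,z_3) = C_{123}\, |z_{12}|^{2(\Delta_3-\Delta_1-\Delta_2)}\, |z_{23}|^{2(\Delta_1-\Delta_2-\Delta_3)}\, |z_{13}|^{2(\Delta_2-\Delta_1-\Delta_3)} \ , \end{aligned}$$
where $z_{ij} = z_i - z_j$ and $C_{123}$ is the undetermined constant. For $N=4$, covariance only determines $F$ up to an arbitrary function of the cross-ratio $x = \frac{z_{12}z_{34}}{z_{13}z_{24}}$ (and its complex conjugate), which is why four-point functions are the simplest correlation functions that carry nontrivial dynamical information.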

Doesn’t any covariant function belong to multiple 2d CFTs?

In conformal field theory, any correlation function can be written as a linear combination of s-channel conformal blocks. These conformal blocks are a particular basis of smooth covariant functions, labelled by a conformal dimension and a conformal spin. (I will not try to say precisely what smooth means.) In two dimensions, we actually have a family of bases, parametrized by the central charge c, with the limit c = ∞ corresponding to global conformal symmetry rather than local conformal symmetry.
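
Schematically, and with a notation for the blocks and coefficients that is mine rather than standard, the s-channel decomposition of a four-point function reads
$$\begin{aligned} F(z_1,z_2,z_3,z_4) = \sum_{(\Delta,S)} C_{\Delta,S}\, \mathcal{G}^{(c)}_{\Delta,S}(z_1,z_2,z_3,z_4) \ , \end{aligned}$$
where the sum runs over the conformal dimensions $\Delta$ and conformal spins $S$ of the s-channel states (it may become an integral for a continuous spectrum), $\mathcal{G}^{(c)}_{\Delta,S}$ is the corresponding conformal block at central charge $c$, and the coefficients $C_{\Delta,S}$ are to be determined. The question of this post is then whether, for a given covariant function $F$, such a decomposition exists for one value of $c$, for several values, or for any value.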

Thursday 27 August 2020

(2/2) Open access mystery: why did ERC backstab Plan S?

In my first post about the ERC’s recent withdrawal from supporting Plan S, I tried to explain ERC’s announcement using publicly available information on the ERC, Plan S, and their recent news. The potential dangers of this approach were to miss relevant pieces of information, and to give too much weight to calendar coincidences.

We are still waiting for a detailed and convincing explanation from the ERC, and for a description of their open access strategy, if they still have one. Meanwhile, I would like to complete the picture based on informal contacts with a few well-informed colleagues. Two potential lines of explanation emerge.

Wednesday 22 July 2020

(1/2) Open access mystery: why did the ERC backstab Plan S?

The European Research Council (ERC) just announced that they would withdraw their support for Coalition S, the consortium of research funders behind Plan S. Plan S is the valiant but not universally welcome attempt to impose strong open access requirements on research articles, without paying more money to publishers.

The ERC is Europe’s most prestigious research funder, and a main backer of Plan S. Without Plan S, the ERC has no open access strategy, and without the backing of the ERC, Coalition S may not be big enough to succeed. Why would the ERC make this U-turn? I do not know, but let me gather a few potentially relevant pieces of the puzzle. The pieces are of three types:
  • some context on the ERC and more generally on Europe’s research plans,
  • the recently announced rights retention strategy by Coalition S,
  • ERC’s meager and not very credible justification for their withdrawal.