tag:blogger.com,1999:blog-91197930028200726452023-05-17T02:24:54.359-07:00Research Practices and ToolsSylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.comBlogger76125tag:blogger.com,1999:blog-9119793002820072645.post-141220506142893102023-04-21T13:02:00.005-07:002023-04-21T13:06:53.610-07:00An environmental boycott of Elsevier<p>Since 2012, thousands of academics have been <a href="https://en.wikipedia.org/wiki/The_Cost_of_Knowledge">boycotting</a> the academic publisher <a href="https://en.wikipedia.org/wiki/Elsevier">Elsevier</a>, which they blame for overpricing its journals, and more generally for resisting open access to the scientific literature. Of course, most major academic publishers are guilty of the same, but Elsevier stands out as the worst offender. For instance, Elsevier was the last major publisher to join the <a href="https://en.wikipedia.org/wiki/Initiative_for_Open_Citations">Initiative for Open Citations</a>, years after all the others. Elsevier did not join the <a href="https://i4oa.org/">Initiative for Open Abstracts</a>, and it plays a leading role in the legal persecution of <a href="https://en.wikipedia.org/wiki/Sci-Hub">Sci-Hub</a>.</p><p>In February 2022, Elsevier was <a href="https://www.theguardian.com/environment/2022/feb/24/elsevier-publishing-climate-science-fossil-fuels">denounced</a> for helping the fossil fuel industry via its publishing and consulting activities. Again, other publishers are doing that too, and Elsevier is only the worst (or most prominent) offender. This has led to a (no longer active) <a href="https://www.ucsusa.org/about/news/science-groups-launch-petition-urging-journal-publisher-share-plan-halting-anti-climate">petition</a> by the Union of Concerned Scientists. Elsevier was not impressed, and now there is a campaign called <a href="https://stopelsevier.wordpress.com/">Stop Elsevier</a>.</p><p>This campaign includes a boycott. 
Participants may commit to:</p><ul><li><p>Refuse to Review</p></li><li><p>Refuse to Submit</p></li><li><p>Write to Editors</p></li><li><p>Refuse to Edit</p></li><li><p>Take Direct Action</p></li><li><p>Share Boycott on Social Media</p></li></ul><p>There are also various options for allowing these commitments to be publicly shared, as in the original <a href="http://thecostofknowledge.com/">Cost of Knowledge boycott</a>. For the moment, they are still kept private, though.</p><p>The new boycott won’t sink Elsevier by itself, but it could reinforce its well-deserved reputation for unprincipled greed. This would help discourage academic institutions from doing business with Elsevier. </p>Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-38234814817884767412022-12-29T00:55:00.001-08:002022-12-29T00:55:05.564-08:00Hypocrites in the air, and Barnabas Calder<p>Since travel by plane is one of the main sources of carbon emissions by researchers, climate scientists who take the plane have been called <a href="https://kevinanderson.info/blog/hypocrites-in-the-air-should-climate-change-academics-lead-by-example/">hypocrites in the air</a>. The expression could be applied to many other researchers, who worry about climate change (without necessarily working on the subject), but fly much more than is really needed.</p><p>But what is “really needed” plane travel for a researcher? Surely, we could eliminate much plane travel without compromising our work, by renouncing useless or marginally useful meetings, conferences or visits. Videoconferencing can be helpful: it is often worthwhile to save the time and expense of travelling, by accepting some loss in communication quality. However, if we take the climate seriously, this cannot be enough. 
Emissions need not just be halved, but be brought close to zero, and quickly.</p><p>Taking the climate seriously implies reducing emissions even when this is detrimental to research – or other activities. Some researchers are already doing it. Barnabas Calder, author of <i>Architecture: from prehistory to climate emergency</i>, explains in his introduction that he did not visit many of the buildings he discusses:</p><blockquote><p>Parts of the book might have been better with first-hand experience of the buildings, but in a world that can ill afford the carbon burden of jet-fuelled travel, the move towards sustainable energy consumption will involve bigger and tougher compromises than this.</p></blockquote><p>Unlike most research subjects, Calder’s work is relevant to addressing climate change, and yet he accepts the handicap of travel restrictions. What do we call the opposite of a hypocrite in the air? An honest scientist on the ground?</p>Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-48256776357638478162022-11-24T10:48:00.000-08:002022-11-24T10:48:44.956-08:00No more rejections: eLife reinvents the academic journal<h4 id="academic-journals-as-gatekeepers">Academic journals as gatekeepers</h4><p>Nowadays, the role of academic journals is not so much to disseminate information as to choose which submitted articles to accept or reject for publication. Gatekeeping was surely important when journals were actually printed, in order to save paper and shelf space. Today, it serves as a token of quality, which can make or break the careers of the articles’ authors.</p><p>The journal’s publishing decisions are based on the peer review process, which involves the work of reviewers and editors. The process can be fair or biased, rigorous or cursory, constructive or confrontational. 
How would we know, when the result is only one bit of publicly available information? Not even one bit, actually: when an article is rejected, we do not even know that it was submitted.</p><p>In good cases, the process can in fact yield more useful information: because it results in the article being improved, and/or because the reviewers’ reports are made public. However, this generally does not occur when the article is rejected, even in best-practices-in-their-field journals such as SciPost.</p><h4 id="elifes-new-model">eLife’s new model</h4><p>To avoid wasting the reviewers’ work, and to make it impossible to judge articles by the journals that publish them, the obvious solution is to eliminate accept/reject decisions. This is the basis of eLife’s new model, <a href="https://elifesciences.org/inside-elife/54d63486/elife-s-new-model-changing-the-way-you-share-your-research">announced here</a>. As with most radical progress, the challenge is to succeed in spite of the system’s corruption and its perverse incentives. In eLife’s case, this requires that the resulting articles count as publications for whoever manages the authors’ careers. To this end, eLife is assigning DOIs, and the authors can designate an official Version of Record.</p><p>Will eLife thrive or perish? Will its new model be emulated? Will much-needed systemic change follow? Hard to tell. What I can do for the moment is to discuss the model’s logic in more detail, and point out its strengths and weaknesses.</p><h4 id="the-editor-is-back">The editor is back</h4><p>A striking feature of the new model is the crucial role of editors. In earlier times, editors used to run their journals without help from reviewers: <a href="https://physicstoday.scitation.org/doi/10.1063/1.2117822">Einstein was outraged</a> when presented with a report from an anonymous reviewer in 1936, for the first time in his career. 
Nowadays, editors delegate much work and responsibility to reviewers, who are often asked to provide explicit recommendations on whether to accept or reject submitted articles.</p><p>In eLife’s new model, reviewers still write reports, but they can focus on giving their scientific opinion without worrying about the journal’s publication criteria. Editors now make important decisions:</p><ul><li><p>Performing an initial screening to decide whether the submitted article deserves consideration, or should be desk-rejected. Given the large number of articles that are written nowadays, some sort of screening is unavoidable.</p></li><li><p>After peer review has been done, rating the article for significance and strength of support.</p></li></ul><p>The purpose of the rating system is to help readers quickly decide whether the article is worthy of trust and attention. Crucially, ratings are not optional or delegated to reviewers, as they are in SciPost: they are done systematically, and involve the editors. This will hopefully allow the ratings to be done consistently and meaningfully, by people who are well placed to compare many articles. To evaluate the quality and interest of an article, explicit ratings are potentially much better than a crude accept/reject decision.</p><p>This system requires a lot of work and commitment from editors: this could well be a weakness. 
However, not rejecting articles and doing all the review work in public can certainly save a lot of the editors’ and reviewers’ time, if not at eLife in particular, at least at a systemic level.</p><h4 id="more-details-on-the-rating-system">More details on the rating system</h4><p>The rating system, or “eLife assessment”, is <a href="https://elifesciences.org/inside-elife/db24dd46/elife-s-new-model-what-is-an-elife-assessment">described here</a>:</p><ul><li><p><span class="math inline">5</span> ratings for significance of findings, from useful to landmark.</p></li><li><p><span class="math inline">6</span> ratings for strength of support, from inadequate to exceptional.</p></li></ul><p>These ratings are supposed to be embedded in a short text that summarizes the editors’ and reviewers’ opinion of the article. Having standard ratings promises to allow easy comparison between articles, and should be very valuable to readers. At the same time, it makes it possible to do statistics and build metrics, with all their potential for misuse. These metrics would however be directly based on an explicit evaluation of the article, and could therefore be more meaningful than citation numbers, let alone journal impact factors.</p><p>This rating system is not far from what I have been proposing <a href="http://researchpracticesandtools.blogspot.com/2014/03/rating-scientific-articles-why-and-how.html">in general</a> and <a href="http://researchpracticesandtools.blogspot.com/2020/03/with-open-peer-review-open-is-just.html">in particular</a>. Possible improvements to the eLife system could include letting authors suggest ratings for their own articles, and adding a rating for clarity, i.e. quality of writing and presentation.</p><h4 id="reactions">Reactions</h4><ul><li><p><a href="https://kamounlab.medium.com/will-the-academic-prisoners-dilemma-impact-elife-2-0-bcb0c2b7c279">Thoughtful blog post</a> by Sophien Kamoun.</p></li><li><p><a href="https://www.timeshighereducation.com/opinion/destroying-elifes-reputation-selectivity-does-not-serve-science">The usual criticism</a> that researchers have to play by the rules of the system, and would commit suicide by participating in eLife’s new model. By this logic, eLife should have been content to perform good-quality open peer review: attempts at further improvement are doomed.</p></li><li><p><a href="https://www.hhmi.org/news/hhmi-statement-support-elife-and-open-science-innovation">HHMI’s enthusiastic support</a>. After <a href="https://en.wikipedia.org/wiki/Plan_S">Plan S</a>, another case of funders pushing researchers towards good practices.</p></li></ul>Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-26544139584383150462022-10-04T14:12:00.000-07:002022-10-04T14:12:52.620-07:00Surprisingly good CNRS guidelines on open access<p>When it comes to good practices, research institutions are often good at declarations of principles, and not so good at implementation. For example, it is easy to <a href="https://en.wikipedia.org/wiki/San_Francisco_Declaration_on_Research_Assessment">declare</a> that research assessment should be qualitative and not rely too much on bibliometrics, but <a href="https://researchpracticesandtools.blogspot.com/2021/12/reforming-research-assessment-nice.html">harder to do it in practice</a>.</p><p>This is why I am pleasantly surprised by recent CNRS guidelines on open access. These guidelines take the form of a letter to CNRS researchers who have to fill in yearly reports on their work. 
</p><p> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJicNHII-7E6rbXSEj3SDpGHpyqru3QhloU9_GYZFFOQt2eHYhbt2Yzsm7TM-T5lIu9JuI9EZxY7a-I1eg25bcSPNc1KrWhLLHUzn_CrcY_h5L2VqfxHpl5kts-Qq3EYnfbIl4Th9TqAx4IlW0rckvroAXvDmUnEi2hNuadtxgz3ejsK-3EiaxHlL1lw/s2339/Lettre_DGDS_CRAC_2022_10_04-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="2339" data-original-width="1654" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJicNHII-7E6rbXSEj3SDpGHpyqru3QhloU9_GYZFFOQt2eHYhbt2Yzsm7TM-T5lIu9JuI9EZxY7a-I1eg25bcSPNc1KrWhLLHUzn_CrcY_h5L2VqfxHpl5kts-Qq3EYnfbIl4Th9TqAx4IlW0rckvroAXvDmUnEi2hNuadtxgz3ejsK-3EiaxHlL1lw/s320/Lettre_DGDS_CRAC_2022_10_04-1.png" width="226" /></a><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfdWiA9WW4XcdeENjrXl_WvfwE_vNWLWtEJHqwd2dlJoA0OYPvVGhXJj38_kw0u_FDLXnreRuid7Kf_T3wd8N2Cdl0biOT7DPuBsojTMva5u8h_qDpGIE1FxYxQ1flM_t8pwGXwdL5fkjn-EdQxKOlPAtElM69NXyulr0tOiUpuCAj1RMpamPv3BJhTg/s2339/Lettre_DGDS_CRAC_2022_10_04-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="2339" data-original-width="1654" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfdWiA9WW4XcdeENjrXl_WvfwE_vNWLWtEJHqwd2dlJoA0OYPvVGhXJj38_kw0u_FDLXnreRuid7Kf_T3wd8N2Cdl0biOT7DPuBsojTMva5u8h_qDpGIE1FxYxQ1flM_t8pwGXwdL5fkjn-EdQxKOlPAtElM69NXyulr0tOiUpuCAj1RMpamPv3BJhTg/s320/Lettre_DGDS_CRAC_2022_10_04-2.png" width="226" /></a></div><br /></div><br /><p></p><p>The salient points about open access are:</p><ul><li><p>Over the last three years, the proportion of articles by CNRS researchers that are openly accessible rose from 49% to 77%. 
(OK, these data mean little in the absence of more details.)</p></li><li><p>CNRS is adopting a rights retention strategy, as <a href="https://researchpracticesandtools.blogspot.com/2021/01/open-access-by-decree-success-of-plan-s.html">proposed by Coalition S</a>: it recommends that all articles be distributed under a CC-BY license. In particular, this allows the articles to be made openly accessible right at the time of publication.</p></li><li><p>CNRS is not asking researchers for their lists of publications: instead, CNRS just takes publication lists from the national preprint archive HAL.</p></li><li><p>The weak point of all this seems to be the impractical and clunky nature of HAL. However, as the letter points out, HAL is increasingly interoperable with disciplinary archives such as arXiv and bioRxiv. And indeed, my recent articles were automatically harvested by HAL from arXiv.</p></li></ul><p>This goes in the direction of having a strong open access mandate while requiring no extra work from researchers. To get there, it remains to make the CC-BY license mandatory, and the upload of articles to HAL fully automatic.</p>Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-6735891247301882272022-03-16T15:57:00.003-07:002022-03-16T15:58:52.528-07:00The war in Ukraine: directives to French researchers from CEA and CNRS<p>My <a href="https://researchpracticesandtools.blogspot.com/2022/03/the-war-in-ukraine-letter-from-maxim.html">previous post</a> reproduced a letter from Maxim Chernodub, suggesting how French scientists could help Ukrainian colleagues, and also calling on French scientists <i>not</i> to boycott Russian collaborators. 
However, it seems that we will not have much choice in the matter, at least if we follow the official directives, which I will paraphrase below.<br /></p><h3 style="text-align: left;">Directives from <a href="https://en.wikipedia.org/wiki/French_Alternative_Energies_and_Atomic_Energy_Commission">CEA</a> (received by email)<br /></h3><div style="text-align: left;">Russian scientists may remotely attend online conferences, but only as individual experts, i.e. provided they do not represent an institution. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">Already submitted publications are OK. But it is forbidden to submit a new publication with a coauthor who is affiliated in Russia. </div><div style="text-align: left;"><br /></div><div style="text-align: left;"><h3 style="text-align: left;"><a href="https://www.cnrs.fr/fr/le-cnrs-suspend-toutes-nouvelles-formes-de-collaborations-scientifiques-avec-la-russie">Press release</a> from <a href="https://en.wikipedia.org/wiki/French_National_Centre_for_Scientific_Research">CNRS</a></h3><div style="text-align: left;">CNRS is suspending all new scientific collaboration with Russia. Russian researchers who work in France may continue their activities. </div><div style="text-align: left;"><br /></div><div style="text-align: left;"><h3 style="text-align: left;">Interpretation</h3><div style="text-align: left;">From CNRS we only have a press release so far. The precise directives, when they come, may well look like CEA's. It is unclear how these directives will be enforced. Most scientific journals <a href="https://www.nature.com/articles/d41586-022-00718-y">still allow</a> submissions from Russia-based authors. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">As far as I can tell, such drastic measures against Russian scientists are unprecedented. No similar measures were taken in other cases of human rights abuses, including wars of aggression. 
</div><div style="text-align: left;"><br /></div><div style="text-align: left;">For researchers who want to keep collaborating with Russians without violating the official directives, technical loopholes might help: the directive from CEA is about <i>submitting publications</i>, but preprints are not publications, right? Co-authors may not be <i>affiliated in Russia</i>, but could they publish as private individuals, without citing an affiliation? <br /></div></div></div>Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-81157424617892700112022-03-16T15:21:00.000-07:002022-03-16T15:22:07.722-07:00The war in Ukraine: a letter from Maxim Chernodub<p><i>Since the beginning of the Russian invasion of Ukraine, Western researchers have been wondering how to help our Ukrainian colleagues, and how to behave with our Russian colleagues. A letter by French-Russian-Ukrainian physicist <a href="https://en.wikipedia.org/wiki/Maxim_Chernodub">Maxim Chernodub</a> has been circulating, which offers valuable perspective and advice on these issues. (It was written on 27/02/2022.) Below is the text of the letter, reproduced with the author's permission. </i></p><p><i><span></span></i></p><a name='more'></a><p></p><p>Dear Friends,</p><p>I am so happy to have so many current collaborators that I'm
sorry to say that I cannot write to everybody personally. Sorry for my
scientific inactivity in the recent weeks; I'll also be off science next
week.<br />
<br />
We all don't like lengthy messages, so I tried to be as brief
as possible (but it did not work, in fact). I skip trivial but
powerful statements such as "Science should unite people," etc. We all
know them well. Instead, I would like to ask for some coherent
near-scientific effort.<br />
<br />
To understand the background, I need to tell a bit more about
myself (my CV on my institute page is well-outdated):<br />
<br />
I'm a French-Russian-Ukrainian scientist:</p><ul style="text-align: left;"><li>born/raised in Ukraine; my mother language is an Eastern
dialect of Ukrainian;</li><li>graduated and worked in Moscow (particle physics); I have
two Russian degrees in science; now I'm a Leading researcher (at a
distance) in a Russian University; I co-lead a group of young people (quantum
field theory).</li><li>I'm permanently working at French CNRS (Directeur de
recherche = senior scientist in the French system); head of a modest Field
theory group at the Institut Denis Poisson, Tours-Orléans. <br /></li></ul><p>
<br />
My parents of mixed Ukrainian/Russian origin refused to leave
Kyiv for safe France weeks ago, although it was very easy (I would make
the same decision if I were at their side). Now Kyiv is encircled; my
parents help make final preparations in a bomb shelter in Kyiv. We
expect an assault soon, in a matter of hours or days.<br />
<br />
My brother, an IT specialist and an assistant Professor at a
University in Lviv (in the western part of Ukraine), was not enlisted in
the army due to health issues. From the beginning of this week, he
stays in nearby Poland with his family and continues to work at a
distance. It was a reasonable solution: yesterday, a building in his block
in Kyiv was heavily damaged by a shell or rocket.<br />
<br />
I was born and spent my childhood in the Kharkiv region
located in the eastern part of Ukraine. Kharkiv is the city of Landau,
Lifshitz, Pomeranchuk (Pomeranchuk instability), Shubnikov (SdH
oscillations), Podolsky (EPR paradox), and many other prominent scientists.
Now, the city of Kharkiv is encircled and being stormed; many nearby
cities and towns -- with the names so familiar to me from childhood --
are in flames.<br />
<br />
I have friends and friends of friends all over Ukraine. They
share what they see with their eyes. Some places are easy to localize
("this burning armored vehicle is on the street near my friend's
apartment in a city in the Kharkiv region"; "these lifeless bodies of
soldiers lie close to a roundabout on my usual way to a subway station in
Kyiv, etc"). It's not fake news over social networks.<br />
<br />
I read news in Ukrainian, Russian, English, and French.<br />
<br />
So, I feel that I have a generic vision of what happened the
last few days and what is going on right now (and I have no idea what
awaits us in the future, unfortunately).<br />
<br />
Now, here is the main message (I try to keep it short):<br />
<br />
1) As a Ukrainian scientist, I would like to ask you: please
don't enforce or support the calls to boycott Russian collaborators.
These calls sound typically as follows (I cite certain current
discussions in the high-energy-physics community): "no single paper in
collaboration with Russians"; "freeze all our common projects with
Russians"; "ask Russian colleagues to fill a questionnaire whether they
approve what is happening or not, and then we decide on our collaboration",
etc).<br />
<br />
I must say that most (if not all) of our Russian colleagues
share our values and principles. They are as horrified as we are by
seeing what is happening in Ukraine. Please don't allow the narrow-minded administration-motivated colleagues to punish the Russian
scientists twice: both from their Russian government and from their
colleagues from abroad. We know that they are suffering a lot already.<br />
<br />
A peaceful protest in Russia is not easy (I do not cite any
examples here because this email is not a political message).
Therefore, we should praise our brave Russian colleagues who wrote and
signed this strongly-worded document against the War:<br />
<br />
<a href="https://www.eureporter.co/world/russia/2022/02/24/an-open-letter-from-russian-scientists-and-science-journalists-against-the-war-with-ukraine/" target="_blank">https://www.eureporter.co/world/russia/2022/02/24/an-open-letter-from-russian-scientists-and-science-journalists-against-the-war-with-ukraine/</a><br />
<br />
The Russian version contains now, as of 27/02/2022, more than 4000
signatures (according to the organizers, this list is far from complete,
so in reality it is bigger):<br />
<a href="https://trv-science.ru/2022/02/we-are-against-war/" target="_blank">https://trv-science.ru/2022/02/we-are-against-war/</a>.<br />
<br />
Please notice that I'm talking only about person-to-person or group-to-group collaborations with Russian colleagues and
Russian groups. (It is up to governments and administrations to decide
the fate of the inter-governmental and/or EU-supported programs.)<br />
<br />
→ this should be done now.<br />
<br />
2) As a Russian scientist, I would like to ask you to help
launch new programs with Ukrainian colleagues: emergent recovery
programs, mobility grants, mutual bi-lateral Ph.D. doctorships, organization of
joint conferences, and all that machinery we know well.<br />
<br />
You don't know who is working in Ukraine on your topic? Just
look at a publication tree, check around, ask colleagues. Be, please,
active. There are many talented young people in Ukrainian
Universities. The EU/US governments, of course, will try to support the recovery
of Ukrainian science. But the scale and overall success of this
program will depend on us, who will provide the person-to-person and group-to-group basis.<br />
<br />
Of course, No.2 depends on the outcome of the War in Ukraine
and the postwar details. Let's hope for a peaceful future.<br />
<br />
→ now, no active collaboration is possible but we should be
prepared for the future.<br />
<br />
3) As a French scientist, I will do my best to fulfil the both
points sketched above.<br />
<br />
Please, feel free to share these thoughts (or circulate the
letter itself) around you.<i> <br /></i></p>Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-26965258216983550872021-12-31T12:31:00.006-08:002021-12-31T12:34:18.333-08:00Reforming research assessment: nice declarations, little action?<p>There seems to be a consensus among universities and research funders that <b>research assessment should not be based on crude quantitative metrics</b>, such as: numbers of articles, numbers of citations, journal impact factors, the h-index, etc. The 2012 <a href="https://en.wikipedia.org/wiki/San_Francisco_Declaration_on_Research_Assessment">San Francisco Declaration on Research Assessment</a> (DORA) formulates principles which could greatly improve research assessment if they were applied, although I would argue that the DORA is <a href="https://researchpracticesandtools.blogspot.com/2013/08/write-for-humans-not-for-robots.html">misguided in its recommendations to authors</a>. The DORA has been signed by thousands of organizations: just for France this includes the Academy of Sciences, CNRS and HCERES. More recently, the European Commission has issued a report called <a href="https://op.europa.eu/en/publication-detail/-/publication/36ebb96c-50c5-11ec-91ac-01aa75ed71a1/language-en">Towards a reform of the research assessment system</a>, which deals with the same issues and promotes similar principles.</p><p>Since <b>the same principles have to be reiterated 9 years later</b>, you may think that little has changed in all that time. And you would be largely right. Significant reforms of research assessment in individual organizations are so rare that they are <a href="https://www.nature.com/articles/d41586-021-01759-5">newsworthy</a>. 
And some universities are denounced for taking actions that <a href="https://www.nature.com/articles/d41586-021-00793-7">directly contradict the principles they have officially endorsed</a>.<span></span></p><a name='more'></a><p></p><p>In the case of CNRS, the 2019 <a href="https://www.science-ouverte.cnrs.fr/en/">Roadmap for Open Science</a> states that “<b>providing a full and complete list of productions is unnecessary</b>”. However, the current form to be filled in by candidates for permanent positions includes a complete list of productions. In addition, candidates are asked to provide the following statistics:</p><ul><li><p>Number of publications in peer-reviewed journals</p></li><li><p>Number of publications in peer-reviewed conference proceedings</p></li><li><p>Number of books or book chapters</p></li><li><p>Number of theses supervised</p></li><li><p>Number of theses co-supervised</p></li><li><p>Number of invited lectures in international scientific conferences</p></li><li><p>Number of patents</p></li></ul><p>If listing all publications is unnecessary, why would we count them? Hopefully, these statistics play little role in the eventual decisions: after all, the candidates also have to give qualitative information, including research programs. Nevertheless, the roadmap is clearly not reflected in current practice.</p><p>Faced with requests for such statistics, what should researchers do? The required numbers are ill-defined: in an “invited lecture in an international conference”, almost every word is ambiguous. Even the h-index depends on who computes it. The principled response is therefore to <b>ignore the requests</b>. One may be afraid to miss an opportunity for employment or promotion. 
However, would a good researcher really want to work for an employer whose decisions are based on meaningless statistics?</p>Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-69309264223083173322021-01-23T06:00:00.001-08:002021-01-23T06:03:39.850-08:00Open access by decree: a success of Plan S?<p>“<a href="https://www.nature.com/articles/d41586-021-00103-1">Science family of journals announces change to open-access policy</a>”: the title of this Nature news article may sound boring, but the news is not:</p><blockquote><p><b><i>Science</i> is allowing authors to distribute their articles under a Creative Commons license, free of charge, without embargo.</b></p></blockquote><p>Sounds too good to be true? Well, of course, there are caveats. First, this is the author’s accepted version, not the published version. (OK, minor point.) Second, this is a trial policy, which may or may not be made permanent. And third, this only applies to authors who are mandated to do so by their Coalition S funders.</p><p>Who, you may ask, are these happy authors? I will defer to Wikipedia for reminders about <a href="https://en.wikipedia.org/wiki/Plan_S">Plan S and Coalition S</a>, and move to the next question: how did Coalition S achieve this? Last summer, Coalition S announced a <a href="https://www.coalition-s.org/rights-retention-strategy/">Rights Retention Strategy</a> that mandates fundees to distribute their work under a CC license, without embargo, <i>no matter what the publisher makes them sign</i>. In their own words,</p><blockquote><p>This public licence and/or agreed prior obligations take legal precedence over any later Licence to Publish or Copyright Transfer Agreement that the publisher may ask the author to sign.</p></blockquote><p>This type of “open access by decree” may sound like little more than an exercise in blame-shifting. 
To an author caught between the incompatible demands of a funder and a journal, it should not matter who blacklists the other: the author cannot publish in that journal. However, Plan S <a href="https://researchpracticesandtools.blogspot.com/2018/11/how-strong-are-objections-to-plan-s.html">got bad press among some researchers</a> for appearing to blacklist journals. Now it is the <b>journals who will have to blacklist funders</b>, and to reject submitted articles for technicalities.</p><p>The success of the maneuver depends on publishers’ reactions. Coalition S is too big to ignore, so publishers have to take a stand. <i>Nature</i> journals have recently announced their <a href="https://group.springernature.com/cn/group/media/press-releases/springer-nature-announces-gold-oa-options-for-nature-journals/18614608">adoption of Gold open access options</a>, for a charge of the order of <b>9,500 euros per article</b>. This was denounced as <a href="http://bjoern.brembs.net/2020/11/are-natures-apcs-outrageous-or-very-attractive/">outrageously expensive</a>. Björn Brembs even took it as a <a href="http://bjoern.brembs.net/2020/12/high-apcs-are-a-feature-not-a-bug/">refutation of the widespread idea</a> that the author-pays model would lead to lower costs.</p><p>However, Coalition S-funded authors can now choose between publishing in <i>Science</i> for free, or in <i>Nature</i> for 9,500 euros. It does seem that <i>Science</i> is competing on price! Not so fast: <i>Nature</i>’s high price is partly an illusion, as it comes in the context of <a href="https://www.springernature.com/gp/open-research/institutional-agreements/oaforgermany">transformative agreements</a>, which are supposed to transition Springer Nature’s journals to open access. The idea is that academic institutions’ journal subscriptions would also cover open access publishing of their researchers’ articles. 
For an author who is covered by a transformative agreement, publishing open access in <i>Nature</i> is effectively free, which may be why <i>Science</i> had to offer the same for free, just to stay competitive.</p><p>At this stage, it is not clear which authors will face which choices of open access options and pricing. It is therefore a bit early for seeing the effects of the Rights Retention Strategy. At least, we now have the admission by an important publisher that <b>open access is not an extra service that should bring them extra revenue</b>.</p>Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-5629344115071453872020-09-04T07:42:00.001-07:002020-09-08T04:33:18.998-07:00Does this covariant function belong to some 2d CFT?<p>In conformal field theory, correlation functions of primary fields are covariant functions of the fields’ positions. For example, in two dimensions, a correlation function of <span class="math inline"><i>N</i></span> diagonal primary fields must be such that <br /><span class="math display">$$\begin{aligned}
F(z_1,z_2,\cdots , z_N) = \prod_{j=1}^N {|cz_j+d|^{-4\Delta_j}} F\left(\tfrac{az_1+b}{cz_1+d},\tfrac{az_2+b}{cz_2+d},\cdots , \tfrac{az_N+b}{cz_N+d}\right) \ , \end{aligned}$$</span><br /> where <span class="math inline"><i>z</i><sub><i>j</i></sub> ∈ ℂ</span> are the fields’ positions, <span class="math inline"><i>Δ</i><sub><i>j</i></sub> ∈ ℂ</span> their conformal dimensions, and <span class="math inline">$\left(\begin{smallmatrix} a& b \\ c& d \end{smallmatrix}\right)\in SL_2(\mathbb{C})$</span> is a global conformal transformation. In addition, there are nontrivial relations between different correlation functions, such as crossing symmetry. But given just one covariant function, do we know whether it belongs to a CFT, and what can we say about that CFT?</p><p>In particular, in two dimensions, do we know whether the putative CFT has local conformal symmetry, and if so what is the Virasoro algebra’s central charge?</p><p>Since covariance completely fixes three-point functions up to an overall constant, we will focus on four-point functions, i.e. <span class="math inline"><i>N</i> = 4</span>. The stimulus for addressing these questions came from the <a href="https://arxiv.org/abs/1912.00973">correlation functions in the Brownian loop soup</a>, recently computed by Camia, Foit, Gandolfi and Kleban. (Let me thank the authors for interesting correspondence, and Raoul Santachiara for bringing their article to my attention.) <br /></p><h4 id="doesnt-any-covariant-function-belong-to-multiple-2d-cfts">Doesn’t any covariant function belong to multiple 2d CFTs?</h4><p>In conformal field theory, any correlation function can be written as a linear combination of <span class="math inline"><i>s</i></span>-channel conformal blocks. These conformal blocks are a particular basis of smooth covariant functions, labelled by a conformal dimension and a conformal spin. (I will not try to say precisely what smooth means.)
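As a quick sanity check, the covariance condition above can be verified numerically in the three-point case just mentioned, where covariance fixes the function up to an overall constant. Here is a minimal sketch; the dimensions, positions and SL(2,C) matrix entries are made up for illustration, and `three_point` is just the standard covariant form written as a helper function:

```python
import numpy as np

# Made-up conformal dimensions Delta_j (real, for simplicity) and positions z_j
D = np.array([0.3, 0.5, 0.7])
z = np.array([0.2 + 1.1j, -0.8 + 0.3j, 1.5 - 0.6j])

def three_point(z, D):
    # Covariant three-point function of diagonal primaries, up to normalization:
    # product over pairs of |z_i - z_j|^{2(Delta_k - Delta_i - Delta_j)}
    z12, z13, z23 = z[0] - z[1], z[0] - z[2], z[1] - z[2]
    return (abs(z12) ** (2 * (D[2] - D[0] - D[1]))
            * abs(z13) ** (2 * (D[1] - D[0] - D[2]))
            * abs(z23) ** (2 * (D[0] - D[1] - D[2])))

# An arbitrary SL(2,C) element: pick a, b, c, then fix d so that ad - bc = 1
a, b, c = 1.0 + 0.5j, -0.3 + 0.2j, 0.7 - 0.4j
d = (1 + b * c) / a

zp = (a * z + b) / (c * z + d)                    # transformed positions
prefactor = np.prod(abs(c * z + d) ** (-4 * D))   # product of |c z_j + d|^{-4 Delta_j}

# Covariance: F(z_j) = prefactor * F(z'_j)
assert np.isclose(three_point(z, D), prefactor * three_point(zp, D))
```

For four-point functions the same check works, but covariance no longer fixes the function: it can be multiplied by any function of the cross-ratio, which is why conformal blocks enter the game.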
In two dimensions, we actually have a family of bases, parametrized by the central charge <span class="math inline"><i>c</i></span>, with the limit <span class="math inline"><i>c</i> = ∞</span> corresponding to global conformal symmetry rather than local conformal symmetry.<span></span></p><a name='more'></a><p></p><p>Therefore, any smooth covariant function can be written as a linear combination of blocks, for any value of the central charge <span class="math inline"><i>c</i></span>. In this sense, we can probably find some 2d CFT where our function can be interpreted as a correlation function, since we have complete freedom to cook up the CFT’s other correlation functions in order to satisfy crossing symmetry.</p><p>This is a priori not terribly interesting: in the absence of more constraints, the space of consistent CFTs is vast, and presumably includes an uncharted jungle of atrociously complicated creatures. To obtain meaningful results, we have to look for decompositions into conformal blocks that obey more constraints.</p><h4 id="the-importance-of-being-simple-the-case-of-minimal-models">The importance of being simple: the case of minimal models</h4><p>Let us consider a four-point function in a minimal model of 2d CFT. This can be decomposed into finitely many conformal blocks with the true central charge <span class="math inline"><i>c</i></span> of the model, schematically <br /><span class="math display">$$\begin{aligned}
F(z_j) = \sum_{i=1}^n D_i \mathcal{F}^{c}_{\Delta_i}(z_j)\mathcal F^c_{\bar \Delta_i}(\bar z_j) \ , \end{aligned}$$</span><br /> where <span class="math inline"><i>D</i><sub><i>i</i></sub></span> are position-independent structure constants, <span class="math inline">ℱ<sub><i>Δ</i></sub><sup><i>c</i></sup>(<i>z</i><sub><i>j</i></sub>)</span> is a <a href="https://en.wikipedia.org/wiki/Virasoro_conformal_block">Virasoro conformal block</a>, and <span class="math inline">$\Delta_i,\bar \Delta_i$</span> are left- and right-moving conformal dimensions, with <span class="math inline">$\Delta_i=\bar \Delta_i$</span> if the model is diagonal.</p><p>What happens if we insist on rewriting the same function in terms of conformal blocks with another central charge <span class="math inline"><i>c</i>′≠<i>c</i></span>? Up to simple prefactors, the Virasoro conformal block is a function of the cross-ratio <span class="math inline">$z=\frac{z_{12}z_{34}}{z_{13}z_{24}}$</span> such that <br /><span class="math display">$$\begin{aligned}
\mathcal{F}^c_\Delta(z) = z^{\Delta - \Delta_1-\Delta_2}\sum_{k=0}^\infty \alpha_k z^k \ , \end{aligned}$$</span><br /> where <span class="math inline"><i>α</i><sub>0</sub> = 1</span>, and the coefficients <span class="math inline"><i>α</i><sub>1</sub>, <i>α</i><sub>2</sub>, ⋯</span> are functions of <span class="math inline"><i>c</i>, <i>Δ</i></span> and <span class="math inline"><i>Δ</i><sub>1</sub>, <i>Δ</i><sub>2</sub>, <i>Δ</i><sub>3</sub>, <i>Δ</i><sub>4</sub></span>. Therefore, we have relations of the type <br /><span class="math display">$$\begin{aligned}
\mathcal{F}^c_\Delta(z) = \sum_{m=0}^\infty f_m \mathcal{F}^{c'}_{\Delta+m}(z)\ ,\end{aligned}$$</span><br /> where the coefficients <span class="math inline"><i>f</i><sub><i>m</i></sub></span> depend on <span class="math inline"><i>c</i>, <i>c</i>′,<i>Δ</i></span> and <span class="math inline"><i>Δ</i><sub>1</sub>, <i>Δ</i><sub>2</sub>, <i>Δ</i><sub>3</sub>, <i>Δ</i><sub>4</sub></span>. Therefore, the decomposition of our correlation function into conformal blocks becomes an infinite sum. The correct central charge <span class="math inline"><i>c</i></span> is singled out as the only value that leads to a finite decomposition.</p><p>In the case of minimal models, we actually do not need to go that far to find the correct central charge. The allowed conformal dimensions <span class="math inline"><i>Δ</i><sub>1</sub>, <i>Δ</i><sub>2</sub>, <i>Δ</i><sub>3</sub>, <i>Δ</i><sub>4</sub></span> indeed belong to a finite set that depends on <span class="math inline"><i>c</i></span>, called the Kac table. Moreover, correlation functions obey Belavin-Polyakov-Zamolodchikov partial differential equations, which also betray the value of <span class="math inline"><i>c</i></span>. But the simplicity of the decomposition into conformal blocks is a criterion that also works in more general cases.</p><h4 id="simple-but-not-minimal-the-case-of-liouville-theory">Simple but not minimal: the case of Liouville theory</h4><p>Liouville theory is a solved 2d CFT with a continuous spectrum, so there is no Kac table for quickly finding the central charge. Four-point functions are of the type <br /><span class="math display">$$\begin{aligned}
F(z_j) = \int_{\frac{c-1}{24}}^\infty d\Delta\ D_\Delta \mathcal{F}^c_{\Delta}(z_j)\mathcal{F}^c_{\Delta}(\bar z_j)\ .\end{aligned}$$</span><br /> We may be tempted to deduce the value of <span class="math inline"><i>c</i></span> from the lower bound of integration of <span class="math inline"><i>Δ</i></span>. In terms of the cross-ratio <span class="math inline"><i>z</i></span>, we have <br /><span class="math display">$$\begin{aligned}
\frac{c-1}{24} = \Delta_1+\Delta_2 +\lim_{z\to 0}\frac{\log F(z)}{2\log |z|}\ .\end{aligned}$$</span><br /> However, this requires that we a priori know the spectrum of Liouville theory, with its lower bound on <span class="math inline"><i>Δ</i></span>. Moreover, in case we are determining <span class="math inline"><i>c</i></span> numerically, the convergence is slow, and the factor <span class="math inline">$\frac{1}{24}$</span> further degrades the precision.</p><p>In this example too, let us see what happens if we rewrite the four-point function in terms of blocks for a wrong central charge <span class="math inline"><i>c</i>′</span>: <br /><span class="math display">$$\begin{aligned}
F(z_j) = \int_{\frac{c-1}{24}}^\infty d\Delta\ D_\Delta \sum_{m,\bar m=0}^\infty f_m f_{\bar m} \mathcal{F}^{c'}_{\Delta+m}(z_j) \mathcal{F}^{c'}_{\Delta+\bar m}(\bar z_j)\ .\end{aligned}$$</span><br /> Rewriting this as a sum over the spin <span class="math inline">$s=m-\bar m$</span> and <span class="math inline">$\Delta'=\Delta+m+\bar m$</span>, we obtain a decomposition of the type <br /><span class="math display">$$\begin{aligned}
F(z_j) = \int_{\frac{c-1}{24}}^\infty d\Delta' \sum_{s\in\mathbb{Z}} D'_{\Delta', s} \mathcal{F}^{c'}_{\Delta'+\frac{s}{2}}(z_j) \mathcal{F}^{c'}_{\Delta'-\frac{s}{2}}(\bar z_j)\ ,\end{aligned}$$</span><br /> for some structure constants <span class="math inline"><i>D</i>′<sub><i>Δ</i>′,<i>s</i></sub></span> which are combinations of <span class="math inline"><i>D</i><sub><i>Δ</i></sub></span> and <span class="math inline"><i>f</i><sub><i>m</i></sub></span>. With the true central charge <span class="math inline"><i>c</i></span> we had a diagonal decomposition, with the same dimension <span class="math inline"><i>Δ</i></span> for the left- and right-moving conformal blocks. With a wrong central charge <span class="math inline"><i>c</i>′</span> we now have arbitrary integer spins <span class="math inline"><i>s</i> ∈ ℤ</span>, and therefore a more complicated decomposition. Just like in the case of minimal models, the true central charge is singled out by the simplicity of the decomposition.</p><h4 id="families-of-covariant-functions">Families of covariant functions</h4><p>Let us now consider a family of covariant functions, with dimensions <span class="math inline"><i>Δ</i><sub>1</sub>, <i>Δ</i><sub>2</sub>, <i>Δ</i><sub>3</sub>, <i>Δ</i><sub>4</sub></span> that are arbitrary complex numbers instead of having specific values. (This is almost the case in the <a href="https://arxiv.org/abs/1912.00973">article by Camia et al</a>, whose four dimensions are subject to only one constraint.) We can now investigate whether four-point structure constants factorize. In the example of Liouville theory, this factorization reads <br /><span class="math display">$$\begin{aligned}
D_\Delta(\Delta_1,\Delta_2,\Delta_3,\Delta_4) = C_\Delta(\Delta_1,\Delta_2) C_\Delta(\Delta_3,\Delta_4)\ ,\end{aligned}$$</span><br /> where <span class="math inline"><i>C</i><sub><i>Δ</i></sub>(<i>Δ</i><sub>1</sub>, <i>Δ</i><sub>2</sub>)</span> is a three-point structure constant. This factorization has no reason to hold in the case of arbitrary covariant functions, so does it allow us to distinguish CFT from non-CFT?</p><p>Not really: factorization holds only provided the CFT’s spectrum contains only one copy of each representation of the Virasoro algebra. In more complicated situations, the states are characterized not only by the conformal dimension <span class="math inline"><i>Δ</i></span>, but also by an additional multiplicity parameter <span class="math inline"><i>μ</i></span>, and the decomposition of four-point structure constants into three-point structure constants reads <br /><span class="math display">$$\begin{aligned}
D_\Delta(\Delta_1,\Delta_2,\Delta_3,\Delta_4) = \sum_\mu C_\Delta^\mu(\Delta_1,\Delta_2) C_\Delta^\mu(\Delta_3,\Delta_4)\ .\end{aligned}$$</span><br /> If we do not impose restrictions on the values of <span class="math inline"><i>μ</i></span>, four-point structure constants can always be decomposed in this way, and any family of covariant functions belongs to some 2d CFT, for any value of the central charge. However, when varying the central charge, we may find a value such that the four-point structure constants factorize, or decompose into particularly small numbers of terms.</p><h4 id="higher-symmetry-algebras">Higher symmetry algebras</h4><p>If no value of the central charge leads to a simple decomposition of our covariant function, it is not yet time to despair: we may just be considering the wrong symmetry algebra. We have been decomposing covariant functions into Virasoro conformal blocks, but this has no reason to be meaningful if the true symmetry algebra is larger than the Virasoro algebra. Larger algebras that are often encountered include affine Lie algebras, W-algebras, or several copies of the Virasoro algebra.</p><p>In practice, using the wrong symmetry algebra is similar to using the wrong central charge, and leads to overly complicated decompositions. Moreover, an irreducible representation of a larger symmetry algebra typically contains many representations of the Virasoro algebra, whose conformal dimensions differ by integers. This leads to nontrivial multiplicity parameters <span class="math inline"><i>μ</i></span>.</p><h4 id="conclusion">Conclusion</h4><p>Given a covariant function, the question of whether it belongs to some CFT is probably meaningless. The interesting problem is rather to look for a central charge and symmetry algebra such that the decomposition into conformal blocks is simple. 
Depending on the case, a spectrum may be considered simple if it is finite, or diagonal, or in any case smaller than what we would get for generic central charges. If this succeeds, there is a chance that our covariant function belongs to a CFT that is interesting and meaningful.</p><p>Coming back to the <a href="https://arxiv.org/abs/1912.00973">article by Camia et al</a>, the true central charge is a priori known, but the decomposition into Virasoro conformal blocks for this central charge does not look particularly simple. The four-point structure constants do not factorize unless we introduce nontrivial multiplicities, and the spectrum contains many conformal dimensions that differ by integers. So the four-point functions can probably not be understood in terms of the Virasoro algebra. Maybe one should look for a larger symmetry algebra.</p>Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-80581606087956142642020-08-27T12:29:00.001-07:002020-08-27T12:36:07.097-07:00(2/2) Open access mystery: why did ERC backstab Plan S?<p>In my <a href="http://researchpracticesandtools.blogspot.com/2020/07/open-access-mystery-why-did-erc.html">first post</a> about the ERC’s recent withdrawal from supporting Plan S, I tried to explain ERC’s announcement using publicly available information on the ERC, Plan S, and their recent news. The potential dangers of this approach were to miss relevant pieces of information, and to give too much weight to calendar coincidences.</p><p>We are still waiting for a detailed and convincing explanation from the ERC, and for a description of their open access strategy if they still have one. Meanwhile, I would like to complete the picture based on informal contacts with a few well-informed colleagues. 
Two potential lines of explanation emerge.<span></span></p><a name='more'></a><p></p><h3 id="erc-and-plan-s-the-two-body-problem">ERC and Plan S: the two-body problem</h3><p>Rather than a substantial disagreement, the problem could simply be a power struggle over who makes decisions: how much influence the ERC has on Plan S, and how much decision-making the ERC is willing to delegate.</p><h3 id="the-influence-of-scientific-societies">The influence of scientific societies</h3><p>Scientific societies such as the American Chemical Society are often also scientific publishers, and much of their revenues could be threatened by a transition to open access. Some scientific societies have been <a href="https://axial.acs.org/2019/02/12/american-chemical-society-responds-to-plan-s/">very critical of Plan S</a> from the start. The ERC scientific council claims that it is especially motivated by the needs of young researchers, but that council is made of senior scientists, who might be more sensitive to the needs of scientific societies.</p>Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-84698752068549430232020-07-22T15:58:00.001-07:002020-08-27T12:33:46.672-07:00(1/2) Open access mystery: why did the ERC backstab Plan S?The European Research Council (ERC) <a href="https://erc.europa.eu/news/erc-scientific-council-calls-open-access-plans-respect-researchers-needs">just announced</a> that they would withdraw their support for Coalition S, the consortium of research funders behind Plan S. Plan S is the valiant but <a href="http://researchpracticesandtools.blogspot.com/2018/11/how-strong-are-objections-to-plan-s.html">not universally welcome</a> attempt to impose strong open access requirements on research articles, without paying more money to publishers.<br />
<br />
The ERC is Europe’s most prestigious research funder, and a main backer of Plan S. Without Plan S, the ERC has no open access strategy, and without the ERC’s backing, Coalition S may not be big enough to succeed. Why would the ERC make this U-turn? I do not know, but let me gather a few potentially relevant pieces of the puzzle. The pieces are of three types:<br />
<ul>
<li>some context on the ERC and more generally on Europe’s research plans,</li>
<li>the recently announced rights retention strategy by Coalition S,</li>
<li>ERC’s meager and not very credible justification for their withdrawal.<br />
<a name='more'></a></li>
</ul>
<h3 id="a-weakened-erc-in-a-less-ambitious-european-union">
A weakened ERC in a less ambitious European Union</h3>
<ul>
<li>In the current European commission, research was originally supposed to depend on a commissioner for <a href="https://www.sciencemag.org/news/2019/09/eu-research-commissioner-named-lacks-research-her-title">Innovation and youth</a>. Then there were protests, and the word “research” was reinstated.</li>
<li>In the EU budget deal reached a few days ago, <a href="https://www.sciencemag.org/news/2020/07/eu-leaders-slash-science-spending-18-trillion-deal">research funding was substantially reduced</a>. It remains to be decided how the cuts will impact various research programs. There are <a href="https://sciencebusiness.net/framework-programmes/viewpoint/viewpoint-research-world-lost-battle-time-start-winning-war">calls for fundamental research and the ERC to be sacrificed</a>.</li>
<li>After <a href="https://www.nature.com/articles/d41586-020-01075-4">its president resigned after only three months in the job</a> last April, the ERC is currently without a president.</li>
</ul>
Important budget decisions are being negotiated right now, and the leaderless ERC seems to be in a weak position. Support for Plan S may not weigh much when billions are at stake.<br />
<br />
<h3 id="blaming-publishers-courting-researchers-new-tactics-by-coalition-s">
Blaming publishers, courting researchers: new tactics by Coalition S</h3>
On 15 July 2020, Coalition S announced its <a href="https://www.coalition-s.org/coalition-s-develops-rights-retention-strategy/">Rights retention strategy</a>, designed to ensure that research that it funds is openly accessible as soon as it is published.<br />
<br />
Rather than declaring some journals to be compliant, and forbidding researchers from publishing in non-compliant journals, the idea is now to write researchers’ contracts in such a way that their articles can be made openly accessible, no matter what the journals later say.<br />
<br />
This is apparently an important change, and could even be interpreted as a <a href="https://www.nature.com/articles/d41586-020-02134-6">shift from Gold to Green open access</a>. The change seems to answer the objection that Plan S restricts researchers’ choice of journals, which could endanger their careers in a world where articles are often judged by the journals they appear in.<br />
<br />
However, in principle, the change is not so important. A researcher can still be caught between a funder’s open access mandate and a journal’s rules, and unable to publish if there is an incompatibility. The difference is that instead of Plan S banning the journal, it is the journal that will now reject the submitted article. The announcement by Coalition S is therefore essentially an exercise in blame shifting, aiming to make the publishers appear as the villains, and to win researchers to the cause.<br />
<br />
If this can be made to work legally, this looks like an effective tactic: which publisher would dare reject submitted articles on purely administrative grounds?<br />
<br />
The Rights retention strategy comes with a <a href="https://www.coalition-s.org/plan-s-funders-implementation/">list of implementation dates for the various members of the coalition</a>. It looks like a long-planned move, whose timing was presumably independent from the ERC’s schedule, although it came less than one week before the ERC’s withdrawal.<br />
<br />
<h3 id="a-vague-and-puzzling-announcement-trying-to-read-between-the-lines">
A vague and puzzling announcement: trying to read between the lines</h3>
It would be easy to dismiss the <a href="https://erc.europa.eu/news/erc-scientific-council-calls-open-access-plans-respect-researchers-needs">announcement</a> by ERC’s scientific council as too vague to be meaningful. However, maybe we can salvage a few pieces of useful information:<br />
<ul>
<li>The title is about <b>“respecting researchers’ needs”</b>. In the context of academic publishing, this is a code word for allowing researchers to publish in any journal, without worrying about cost or open access. Focusing on researchers’ needs means helping them thrive in the current perverse publishing system, and forgetting about reforming the system, which is the point of Plan S.</li>
<li><b>“Members of the ERC Scientific Council are participating constructively in various activities aimed at making Open Access a reality.”</b> Who cares what the members are doing? The ERC is an important institution, and what matters is what it is doing as an institution. Writing about its members means admitting that the ERC no longer has any plan or strategy about open access.</li>
<li><b>“During the past six months”</b>: Planning for the U-turn occurred when the ERC had an absentee president or no president at all.</li>
<li><b>“the ERC Scientific Council has intensified its internal debate”</b>: The debate must have been internal indeed, since the announcement has caused <a href="https://www.researchprofessionalnews.com/rr-news-europe-infrastructure-2020-7-surprise-and-confusion-over-erc-council-s-plan-s-reversal/">surprise and confusion</a> among groups representing young researchers. The ERC claims that it is <b>“especially the needs of young researchers”</b> that motivate its decision, but apparently did not consult them.</li>
<li><b>“cOAlition S has declared that the publication of research results in hybrid venues outside of transformative arrangements will be ‘non-compliant’”</b>: This is the only feature of Plan S that the ERC is denouncing, but this issue seems to be mooted by the Rights retention strategy. <br />
</li>
</ul>
<h3 id="tentative-conclusions">
Tentative conclusions </h3>
In their <a href="https://www.coalition-s.org/coalition-s-response-to-the-erc-scientific-councils-statement-on-open-access-and-plan-s/">incredulous response to the ERC’s scientific council</a>, Coalition S appears to be as puzzled as everyone else, and not worried about its own future.<br />
<br />
However, the ERC’s U-turn is certainly not the manifestation of any reversal of opinion in the European research community, and it completely ignores the recent progress of Plan S. Maybe its origin can be found in political machinations at the European level? Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-8854511879691004232020-03-03T13:39:00.000-08:002020-03-03T13:39:20.355-08:00With open peer review, open is just the beginning<h4 id="abstract">
Abstract</h4>
<i>Open peer review does not just mean publishing existing reviewer reports, but should also lead to writing reports primarily for the public. We make a specific proposal for structured reviewer reports, based on the three criteria </i><span class="nodecor">validity</span><i>, </i><span class="nodecor">interest</span><i> and </i><span class="nodecor">clarity</span><i>.</i><br />
<i>This post is partly based on a joint proposal with <a href="https://antonakhmerov.org/">Anton Akhmerov</a> for improving the structure of reviewer reports at <a href="https://scipost.org/">SciPost</a>. Feedback from <a href="https://jscaux.org/">Jean-Sébastien Caux</a> on that proposal is gratefully acknowledged.</i><br />
<h4 id="benefits-of-open-peer-review-the-obvious-and-the-less-obvious">
Benefits of open peer review: the obvious and the less obvious</h4>
<br />
In the traditional model of academic peer review, reviewers’ reports on the submitted article are kept confidential, and this is a big source of inefficiency and waste. If the article is published, the readers can neither assess how rigorous the process was, nor benefit directly from the reviewers’ insights. If it is rejected, the work has to start all over again at another journal.<br />
<br />
Open peer review, defined here as <b>making the reports public</b>, could help journals remedy the shortage of reviewers: if applied to rejected articles, by avoiding duplicated effort, and if coupled with naming reviewers, by giving them better incentives to do the work. However, the consequences of open peer review may be more far-reaching. Published reports can indeed be used for <b>evaluating the article’s interest and quality</b>. In aggregate, they could be used for evaluating journals and researchers. For these purposes, they would certainly be <b>better than citation counts</b>.<br />
<a name='more'></a><br />
While confidential reports are written for the article’s authors and the journal’s editors, published reports are also written for the public. Eventually, they may be <b>written primarily for the public</b>. This will necessarily change how reports are written. In order to guide this change and to take full advantage of it, journals should not just make the existing reports public. They should also change the reports’ structure, in order to make reports more readable, and to make it easier to compare reports on different articles. In particular, the reports could help potential readers decide whether or not to read an article. Ideally, the reports’ format and structure should be <b>journal-independent</b>. And peer review need not be conducted solely in journals, but possibly also in platforms that would not make publish/reject decisions.<br />
<h4 id="widespread-adoption-of-open-peer-review">
Widespread adoption of open peer review</h4>
<br />
Like open access, <b>open peer review is on its way to becoming standard</b>, with even <a href="https://www.nature.com/articles/d41586-020-00309-9">Nature tentatively joining in</a>. See <a href="https://en.wikipedia.org/wiki/Open_peer_review#Adoption_by_publishers">Wikipedia</a> for more details. It is therefore time to rethink reports.<br />
<br />
As a part of the scientific publishing system, the current organization of peer review is not determined by logic or efficiency, but by a historical trajectory that comes from paper-based communication in small communities. Open and frictionless electronic communication must change the system profoundly, and lead to a new system that differs as much from the old as Wikipedia differs from the extinct paper-based encyclopedias. In order to build this new system, a good start would be to take inspiration from the online platforms that succeed in collaboratively generating good content, such as StackExchange and Wikipedia.<br />
<br />
<h4 id="structured-reports-a-proposal">
Structured reports: a proposal</h4>
<br />
The idea is to ask the three main questions that are relevant to potential readers: is the article correct? is it worthy of attention? is it well-written? We want to answer each question with a text summarized by a rating – equivalently, a rating backed by an explanation. By a rating we mean a choice among four possible qualitative characterizations.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-dekUjZaYyO0/Xl7LUNCem7I/AAAAAAAACTw/M5zQfVVU8FklL8WRciTT4TUoU-VRFdgIACLcBGAsYHQ/s1600/proposal4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1263" data-original-width="886" height="640" src="https://1.bp.blogspot.com/-dekUjZaYyO0/Xl7LUNCem7I/AAAAAAAACTw/M5zQfVVU8FklL8WRciTT4TUoU-VRFdgIACLcBGAsYHQ/s640/proposal4.png" width="448" /></a></div>
<br />
A few more details on the proposal:<br />
<ol>
<li><b>Open participation</b>: in addition to invited reviewers, anyone could be allowed to write reports, starting with the authors themselves. However, all fields are optional: for example, some non-experts might comment on the clarity only.</li>
<li><b>Quick exchanges between reviewers, authors and readers</b>: in the fashion of <a href="https://stackexchange.com/">StackExchange</a>, it would be good to allow short comments below each text field. Reports and short comments should appear online immediately: any vetting should be done a posteriori. Moreover, small corrections to the reports should be allowed.</li>
<li><b>Official ratings</b> could be given by journal editors when concluding the peer review process. These official ratings could be displayed prominently (possibly in graphic form), in order to ease comparisons between papers, and to provide benchmarks to future reviewers.</li>
<li><b>Inciting reviewers to renounce anonymity:</b> the anonymity checkbox comes at the start, with No as the default choice.</li>
<li><b>Versioning:</b> since an article can have several versions, there should be a mechanism for the reports to be linked to these versions.</li>
</ol>
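The proposal above is essentially a data structure, and can be sketched as a small data model. A minimal sketch in Python – the class and field names, and the four rating labels, are illustrative choices, not an official specification:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Rating(Enum):
    # "A choice among four possible qualitative characterizations";
    # the labels are made up for illustration
    POOR = 1
    FAIR = 2
    GOOD = 3
    EXCELLENT = 4

@dataclass
class Criterion:
    # A rating backed by an explanation; both fields are optional
    rating: Optional[Rating] = None
    explanation: str = ""

@dataclass
class Report:
    article_version: str          # reports are linked to a specific article version
    anonymous: bool = False       # "No" is the default, nudging reviewers to sign
    reviewer: Optional[str] = None
    validity: Criterion = field(default_factory=Criterion)
    interest: Criterion = field(default_factory=Criterion)
    clarity: Criterion = field(default_factory=Criterion)

# Open participation: a non-expert commenting on clarity only,
# leaving all other fields empty
report = Report(article_version="v2",
                clarity=Criterion(Rating.GOOD, "Well organized, a few typos."))
assert report.validity.rating is None and not report.anonymous
```

Short comments, a posteriori vetting, and official ratings by editors would then be layered on top of this structure.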
<h4 id="case-study-open-peer-review-at-scipost-and-how-to-improve-it">
Case study: Open peer review at SciPost and how to improve it</h4>
To be specific, let us focus on the structure of reports at <a href="https://scipost.org/">SciPost</a>, a recently created family of open access, open peer review journals. Within a few years, SciPost reviewers have written hundreds of publicly available structured reports. While not attempting a systematic analysis of these reports, we will discuss how their structure could be improved.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-ZO_gfttdFhE/Xl7NON6nwUI/AAAAAAAACT8/8Rp8O4LgXJ4FEQirLQiEc7BQA4ry1HdQwCLcBGAsYHQ/s1600/scipost.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1199" data-original-width="1064" height="640" src="https://1.bp.blogspot.com/-ZO_gfttdFhE/Xl7NON6nwUI/AAAAAAAACT8/8Rp8O4LgXJ4FEQirLQiEc7BQA4ry1HdQwCLcBGAsYHQ/s640/scipost.png" width="566" /></a></div>
<br />
Main possible improvements to these reports:<br />
<ol>
<li><b>Eliminate the recommendation to publish or not</b>, which is journal-dependent and of little interest to the public. From a well-structured report, a journal editor should easily be able to infer a recommendation. The hope is also to free reviewers from thinking about this difficult and artificial issue, and to diminish their power with respect to authors. This could make exchanges with authors more constructive.</li>
<li><b>Avoid vaguely defined text fields</b> like Strengths and Weaknesses: a more precise structure would be better.</li>
<li><a href="http://researchpracticesandtools.blogspot.com/2014/03/rating-scientific-articles-why-and-how.html"><b>Too many areas, too many ratings</b></a>: 6 rating areas with 6 or 7 possible ratings per area. Reviewers end up giving ratings more or less arbitrarily, or not at all.</li>
<li><b>The ratings should come with explanatory text fields</b>, in other words, they should emerge from the report’s structure.</li>
</ol>
Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-11789714770687623042019-11-27T09:45:00.000-08:002019-11-29T05:43:53.765-08:00Preprints and other tools of open research: contribution to the Open Access roundtable at GFP 2019In the context of the conference <a href="https://gfp2019.sciencesconf.org/">GFP 2019</a> on polymer chemistry, I am taking part in a roundtable on Open Access. Chemists are coming quite late to the Open Access debate. The preprint archive <a href="https://en.wikipedia.org/wiki/ChemRxiv">Chemrxiv</a> is young, not widely used, and not independent from publishers. The traditional subscription-based publishing system, and the standard bibliometric indicators, dominate communication and evaluation. And when chemists are dragged into the debate by discipline-agnostic initiatives such as <a href="https://en.wikipedia.org/wiki/Plan_S">Plan S</a>, their <a href="https://sites.google.com/view/plansopenletter/open-letter">positions</a> tend to be conservative.<br />
<br />
Inevitably, chemists are being affected by Open Access and other evolutions of the research system, whether or not these evolutions seem beneficial to them. It would be useful for chemists to know more about preprints and other tools of scientific communication, beyond the traditional journals: not only to comply with Open Access mandates, but also to make their own choices among the existing innovations and best practices.
<br />
<a name='more'></a><br />
<h4 id="introducing-myself">
Introducing myself</h4>
<ul>
<li>Researcher in theoretical physics, employed by CNRS, working at CEA Saclay.</li>
<li>Contributor to open and collaborative platforms such as GitHub, Wikipedia, StackExchange.</li>
<li>Editorial board of the Wikijournal of Science, a journal that publishes Wikipedia-style articles.</li>
<li>A blog called “Research practices and tools” for discussing open science, and publishing the reviewer reports that I write.</li>
</ul>
<h4 id="administrative-archives-vs-disciplinary-archives">
Administrative archives vs disciplinary archives</h4>
There are two broad categories of preprint archives, which I will call <i>administrative</i> and <i>disciplinary</i>. (<i>Administrative</i> archives might also be called <i>institutional</i> archives, or <i>local</i> archives, since they cover an institution, country, or region.) Here is a summary table of their features, with some more details below:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-3MwxaClNbNA/Xd6xqjX7LmI/AAAAAAAACRs/hkQy1uNYxsostcoxEM-9ZmLYJptKz4rdACLcBGAsYHQ/s1600/gfp1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="599" data-original-width="1190" height="200" src="https://1.bp.blogspot.com/-3MwxaClNbNA/Xd6xqjX7LmI/AAAAAAAACRs/hkQy1uNYxsostcoxEM-9ZmLYJptKz4rdACLcBGAsYHQ/s400/gfp1.png" width="400" /></a></div>
The largest and oldest disciplinary archive is Arxiv. It was started in 1991 by theoretical physicists as a way to make the distribution of preprints more efficient. (Before that, they were using email. Even before, just mail.) Arxiv is where many physicists find and read their colleagues’ articles, long before they appear in journals. In fields where all papers are on Arxiv, keeping up with the literature requires no more effort than checking the list of new preprints every day.<br />
<br />
It was initially thought that Arxiv might entirely replace scientific journals, but this did not happen, even though the role of journals has diminished greatly. Arxiv preprints can be submitted to journals, and in many journals the submission procedure involves little more than entering an Arxiv preprint number.<br />
<br />
Administrative archives are not built by researchers, but by librarians or administrators. Their purposes are to keep track of researchers’ output, allow researchers to fulfill Open Access mandates, and allow them to disseminate documents that are unsuitable for disciplinary archives. In particular, administrative archives accept submissions from all disciplines. Research articles are typically deposited after they are published in journals: a researcher may deposit an article after the journal’s embargo has expired, or when her employer needs an up-to-date list of her publications. For example, CNRS no longer asks me for my list of publications: CNRS takes it automatically from HAL, and any publication that is missing from HAL is ignored.<br />
<br />
Both types of archives allow the sharing of various types of documents beyond research articles. A small percentage of Arxiv preprints are commentaries (sometimes demolitions) of other articles, or replies by the authors of a criticized article. Each archive has its own rules about which types are acceptable. Archives also allow documents to be updated, while keeping track of older versions.<br />
<br />
<h4 id="three-disciplinary-archives">
Three disciplinary archives</h4>
<h4 id="three-disciplinary-archives">
</h4>
Disciplinary preprint servers share a number of basic features:<br />
<ul>
<li>Free to read, free to publish. (Costs are about $10 per article, covered by institutional subsidies.)</li>
<li>Permanent, irrevocable archiving.</li>
<li>Indexed by Google Scholar and others.</li>
<li>Basic screening, no formal peer review.</li>
<li>Establish priority.</li>
</ul>
A comparison of three important disciplinary archives:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-A4vdYftIfEQ/Xd6y9pVFggI/AAAAAAAACR4/ybveLSfGP0c2l6l1_WVyDOHVwlqG3tOJwCLcBGAsYHQ/s1600/gfp2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="687" data-original-width="1190" height="230" src="https://1.bp.blogspot.com/-A4vdYftIfEQ/Xd6y9pVFggI/AAAAAAAACR4/ybveLSfGP0c2l6l1_WVyDOHVwlqG3tOJwCLcBGAsYHQ/s400/gfp2.png" width="400" /></a></div>
A few details on this table:<br />
<ul>
<li>Arxiv covers a number of disciplines, including some biology and some finance, but mainly it is physics, mathematics and computer science.</li>
<li>Chemrxiv is an initiative of the American Chemical Society and its counterparts in other countries. The ACS is also a major publisher, with a very aggressive stance towards unauthorized sharing of scientific articles. Therefore, we may expect Chemrxiv to be less indifferent than Arxiv or Biorxiv to the prosperity of legacy journals. It is unclear whether a researcher-controlled chemistry archive could still emerge.</li>
<li>Biorxiv is the only archive that allows people to write comments about preprints. However, the rate of comments is low. Twitter seems to be a more popular venue for publicly discussing preprints.</li>
</ul>
<h4 id="life-of-a-research-article">
Life of a research article</h4>
So how exactly do we use Arxiv? In disciplines where all papers are on Arxiv, here is what typically happens to an article:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-9gJXu8oC3N4/Xd6zp88vmnI/AAAAAAAACSA/Jep61JoRnCk45aZAjS0rfd3qRzfkKfC7ACLcBGAsYHQ/s1600/gfp3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="677" data-original-width="1190" height="227" src="https://1.bp.blogspot.com/-9gJXu8oC3N4/Xd6zp88vmnI/AAAAAAAACSA/Jep61JoRnCk45aZAjS0rfd3qRzfkKfC7ACLcBGAsYHQ/s400/gfp3.png" width="400" /></a></div>
<ul>
<li>The first Arxiv version may be the version that most readers read, so it had better be good.</li>
<li>Feedback and improvements are not limited to the journal’s peer review process: they can also come before the paper is submitted to a journal, and after it is published. Informal peer review via exchanges with readers takes a renewed importance. For an example in chemistry, see the comments to <a href="https://www.ch.imperial.ac.uk/rzepa/blog/?p=21037">this blog post by Henry Rzepa</a>, including references to the relevant Chemrxiv preprints.</li>
<li>Post-publication versions are used for correcting mistakes small and large. Errata are rarely sent to the journal. And there is little need to fight reviewers if they ask for changes that I disagree with: I can always have my preferred version on Arxiv.</li>
<li>It is possible to submit an unlimited number of versions to Arxiv. Typical papers only have a few versions. But there are also living reviews that are regularly updated.</li>
</ul>
<h4 id="arxiv-saturation-and-endgame">
Arxiv saturation and endgame</h4>
The Arxiv <a href="https://arxiv.org/help/stats/2018_by_area/index">submission statistics</a> show a marked growth in some subfields, and a stationary situation in others.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-9GJ1QLA6vyQ/Xd60TNFtmvI/AAAAAAAACSI/HcaG5ASs6iM1bbyNvku2JSQnBpCft7NxgCLcBGAsYHQ/s1600/gfp4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="440" data-original-width="514" height="340" src="https://1.bp.blogspot.com/-9GJ1QLA6vyQ/Xd60TNFtmvI/AAAAAAAACSI/HcaG5ASs6iM1bbyNvku2JSQnBpCft7NxgCLcBGAsYHQ/s400/gfp4.png" width="400" /></a></div>
Most of high-energy physics (blue) and condensed-matter physics (green) stopped growing years ago, because all articles in these fields are on Arxiv. This has <a href="http://researchpracticesandtools.blogspot.com/2018/03/the-open-secrets-of-life-with-arxiv.html">many consequences</a> for the work of researchers. In particular, publication in journals is now optional. A famous example: the mathematician Grigori Perelman was offered the 2006 Fields medal for works that appeared in Arxiv preprints and were never submitted to journals. Of course, these works were thoroughly peer-reviewed, but the peer reviewing was not organized through journals.<br />
<br />
This shows that we could easily do research without journals. Nevertheless, journals are alive and well, mainly due to their continued influence on academic careers. Estimating the quality of a work from the journal it appears in remains common practice, although it is <a href="https://en.wikipedia.org/wiki/San_Francisco_Declaration_on_Research_Assessment">widely denounced</a>.<br />
<br />
<h4 id="beyond-open-access-open-peer-review">
Beyond open access: open peer review</h4>
After open access, the next frontier of scientific publishing may be open peer review. Open peer review may mean different things:<br />
<ul>
<li>publishing reviewer reports, typically for accepted articles, as this is harder for rejected articles,</li>
<li>accepting spontaneous reviews in addition to invited reviews,</li>
<li>publishing reviewer names, unless the reviewers decline to do so.</li>
</ul>
Two recently created families of OA journals that practice open peer review:<br />
<ul>
<li>PeerJ Chemistry (started 2018): 5 journals covering all of chemistry.</li>
<li>SciPost Chemistry (coming soon): will be free to read and to publish, funded by subsidies.</li>
</ul>
These began as successful journals in biology and physics respectively, and are now expanding to chemistry.<br />
<br />
<h4 id="beyond-research-articles">
Beyond research articles</h4>
Research articles can carry only certain types of information. Scientific communication involves other media, some of which are well-suited to open, collaborative works:<br />
<ul>
<li>According to this <a href="https://www.ch.imperial.ac.uk/rzepa/blog/?p=20342#more-20342">blog post</a> by chemist Henry Rzepa, sharing <b>data</b> effectively may be more important than having openly accessible articles. To be useful, data should be FAIR: Findable, Accessible, Interoperable and Reusable. See this <a href="https://www.ch.imperial.ac.uk/rzepa/blog/?p=20394#more-20394">other post</a> for an example of publishing data independently of the article.</li>
<li><b>Code</b> can be written collaboratively at GitHub, GitLab, etc. These collaborative code repositories use the version control system Git, originally created for developing the Linux operating system.</li>
<li><b>StackExchange</b> is a family of question and answers websites, with a sophisticated voting and reputation system that is very effective at promoting good content and good contributors. (In comparison, the impact factor and the H-index are crude.)</li>
<li><b>Wikipedia</b> has many readers (<a href="https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org&platform=all-access&agent=user&range=latest-20&pages=Neoprene">600/day for Neoprene</a>) but <a href="http://researchpracticesandtools.blogspot.com/2017/01/why-dont-academics-write-in-wikipedia.html">too few writers among academics</a>. Is it always more important to write a research article for a few colleagues, than a Wikipedia article for thousands of readers?</li>
</ul>
Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-79696928882970689642019-10-01T12:21:00.000-07:002019-10-01T12:37:06.941-07:00Learning scientific writing from great writersFor scientists, writing well (or well enough) is a critical skill, as written texts are essential for communicating research. Of course, not every scientist needs to be able to write well, as some may rely on collaborators. In a lecture on "<a href="http://www.lassp.cornell.edu/mermin/KnightLecture.html">Writing physics</a>", David Mermin emphasizes the importance of language and writing through a famous example:<br />
<blockquote class="tr_bq">
It is also said that even Landau's profound technical papers were actually written by Lifshitz.
Many physicists look down on Lifshitz: Landau did the physics, Lifshitz wrote it up. I don't
believe that for a minute. If Evgenii Lifshitz really wrote the amazing papers of Landau, he was
doing physics of the highest order. Landau was not so much a coauthor, as a natural phenomenon
— an important component of the remarkable coherence of the physical world that Lifshitz
wrote about so powerfully in the papers of Landau.
</blockquote>
<br />
<a name='more'></a>Not everyone writes like Lifshitz (or Mermin), and we often encounter impenetrable writing. We should resist the temptation of blaming the subject matter rather than the author. Impenetrable technical writing is not a necessity: rather, it is a disease that spreads when scientists imitate their predecessors. <br />
<br />
<br />
How to write well and how to learn writing well are recurring problems for scientists. The problem has a nearly tautological solution: learn writing from writers. This solution is far from being universally accepted, and the claim that scientific writing skills are mostly the same as general writing skills <a href="https://academia.stackexchange.com/questions/117413/how-to-improve-scientific-writing-skills/117474#117474">appears controversial</a>. Fortunately, some scientists are wise enough to learn writing not just from writers but from great writers, and to share the experience through a recent Nature career column titled "<a href="https://www.nature.com/articles/d41586-019-02918-5">Novelist Cormac McCarthy’s tips on how to write a great science paper</a>". I certainly agree with the gist of the column and with most of the tips, but I would object to two specific tips:
<blockquote class="tr_bq">
And don’t use the same word repeatedly — it’s boring.
</blockquote>
In technical writing, we often cannot afford the luxury of using synonyms. Pronouns are dangerous too, as they can lead to ambiguity. Repetition of words should be tolerated more readily than in literary texts.<br />
<blockquote class="tr_bq">
Minimize clauses, compound sentences and transition words — such as
‘however’ or ‘thus’ — so that the reader can focus on the main message. </blockquote>
Minimizing clauses and compound sentences is good, but minimizing transition words is dangerous, as the logic of a scientific argument should be as clear and explicit as possible.<br />
<br />
My disagreements with McCarthy's advisees Van Savage and Pamela Yeh come down to putting a premium on clarity and lack of ambiguity in scientific texts. Yes, I am willing to risk boring or distracting the reader with my repetitions and transition words, but what is more boring and distracting than having to reread an unclear sentence or paragraph?<br />
<br />
These minor objections aside, the column is full of good advice, including avoiding footnotes. The nicest tip is however implicit: in order to learn writing from great writers, just read their writings. The English language comes with a practically unlimited supply of great fiction, non-fiction and journalism: learning to write well can be very pleasant indeed.Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-76977647076419504562019-05-31T06:23:00.000-07:002019-05-31T06:23:07.657-07:00Uniqueness of the $2d$ critical Ising model<i>This post is motivated by a request from JHEP to review a <a href="https://arxiv.org/abs/1904.09801">recent article</a> by Anton de la Fuente. I am grateful to the author for stimulating correspondence.</i><br />
<h4 id="the-conformal-bootstrap-analytic-vs-numerical">
The conformal bootstrap: analytic vs numerical</h4>
The critical Ising model is described by a unitary conformal field theory. In two dimensions, that theory is part of a family called minimal models, which can be exactly solved in the analytic bootstrap framework of Belavin, Polyakov and Zamolodchikov. Minimal models are parametrized by two coprime integers <span class="math inline">2 ≤ <i>p</i> < <i>q</i></span>; they are unitary when <span class="math inline"><i>q</i> = <i>p</i> + 1</span>, and the Ising model is the case <span class="math inline">(<i>p</i>, <i>q</i>)=(3, 4)</span>.<br />
<br />
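For reference (this formula is not quoted in the post, but is the standard minimal-model result), the central charge <span class="math inline"><i>c</i> = 1 − 6(<i>p</i> − <i>q</i>)²/<i>pq</i></span> can be checked in a few lines of Python, and indeed gives <span class="math inline"><i>c</i> = 1/2</span> for the Ising case <span class="math inline">(<i>p</i>, <i>q</i>)=(3, 4)</span>:

```python
# Central charge of the (p, q) minimal model, using the standard formula
# c = 1 - 6 (p - q)^2 / (p q).
from fractions import Fraction

def central_charge(p, q):
    return 1 - Fraction(6 * (p - q) ** 2, p * q)

print(central_charge(3, 4))  # Ising model: 1/2

# Unitary series q = p + 1: central charges stay below 1
print([central_charge(p, p + 1) for p in range(3, 7)])
```

Exact rational arithmetic makes it easy to see that the unitary series accumulates at <span class="math inline"><i>c</i> = 1</span> without reaching it.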
These <span class="math inline">2<i>d</i></span> bootstrap results date back to the 1980s. More recently, the bootstrap method has been successfully used in higher dimensional CFTs, such as the <span class="math inline">3<i>d</i></span> Ising model. While the basic ideas are the same, there are important technical differences between <span class="math inline">2<i>d</i></span> and higher <span class="math inline"><i>d</i></span>. <br />
<a name='more'></a>In <span class="math inline">2<i>d</i></span>, the infinite-dimensional Virasoro symmetry algebra yields enough equations for exactly solving minimal models. In higher <span class="math inline"><i>d</i></span>, the conformal algebra is finite-dimensional, and yields fewer equations. A popular bootstrapping technique is to also use the inequalities that follow from the assumption of unitarity. In the space of parameters (such as the energy levels), these inequalities carve a region of consistent theories. The game is to make enough assumptions for this region to be very small, in which case we can determine the values of the parameters for a given model. In this sense, solving a CFT means showing that it is unique under certain assumptions. In practice, a CFT has infinitely many parameters, and we focus on a finite number of parameters, getting rid of the others using inequalities: the method is approximate, and yields numerical results.<br />
<br />
These higher <span class="math inline"><i>d</i></span> methods can be applied to <span class="math inline">2<i>d</i></span> models, and their results compared to known exact <span class="math inline">2<i>d</i></span> results. This is a priori interesting not only for testing the methods, but also for learning more about <span class="math inline">2<i>d</i></span> CFT. Not all <span class="math inline">2<i>d</i></span> CFTs are expected to be exactly solvable: minimal models are defined by the assumption that the spectrum is made of finitely many irreducible representations of the Virasoro algebra, but it is less well-known what happens if we allow infinitely many representations.<br />
<br />
<h4 id="case-of-the-2d-ising-model">
Case of the <span class="math inline">2<i>d</i></span> Ising model</h4>
In his <a href="https://arxiv.org/abs/1904.09801">recent article</a>, de la Fuente applies the higher <span class="math inline"><i>d</i></span> bootstrap techniques to the <span class="math inline">2<i>d</i></span> Ising model. He finds that the model is unique under a number of assumptions: unitarity, conformal and <span class="math inline">ℤ<sub>2</sub></span> symmetry, and bounds on the dimensions of a few operators. This allows him to compute the dimensions of these operators with about three significant digits, which is convincing evidence that we can recover the known results with this method.<br />
<br />
Aside from validating numerical bootstrap techniques, have we learned something new? To answer this question, let me give a more detailed reminder of known analytic results. The Virasoro algebra comes with a parameter called the central charge, whose value is <span class="math inline">$c=\frac12$</span> for the Ising model. We know all unitary representations of the Virasoro algebra with <span class="math inline"><i>c</i> < 1</span>: so, if we had a Virasoro symmetry algebra with <span class="math inline"><i>c</i> < 1</span>, we could easily get uniqueness results for the Ising model, optimize them by relaxing the assumptions as much as possible, and understand what happens when uniqueness breaks down. This raises two questions:<br />
<ul>
<li>Do we have a Virasoro symmetry algebra? In order to use the numerical bootstrap techniques, de la Fuente assumes that we have global conformal symmetry. But Virasoro symmetry is <a href="https://arxiv.org/abs/1902.05273">known to follow</a> from <span class="math inline">2<i>d</i></span> global conformal symmetry, unitarity, and discreteness of the spectrum. Discreteness is not assumed, as it is not needed for the numerical bootstrap. Discreteness is probably not quite necessary for Virasoro symmetry either: it might be possible to make do with weaker assumptions, such as de la Fuente’s gap in the spin <span class="math inline">2</span> sector.</li>
<li>If we have a Virasoro algebra, what is its central charge? We do not know, but maybe we could find out using inequalities from unitarity, as I will argue below.</li>
</ul>
This suggests that we might seek to prove Virasoro symmetry and find bounds on the central charge, rather than directly constrain parameters such as energy levels. By combining analytic and numerical bootstrap techniques, one might obtain more powerful results. It is not clear how easy this would be, and I will now sketch how one might get started.<br />
<h4 id="combining-analytic-and-numerical-bootstrap-techniques">
Combining analytic and numerical bootstrap techniques?</h4>
We want bounds on the central charge, but the known unitary bootstrap techniques give bounds on conformal dimensions (= energy levels) and on OPE coefficients. So let us relate the central charge to OPE coefficients. Let <span class="math inline"><i>σ</i></span> be a scalar primary field, and <span class="math inline"><i>T</i></span> the energy-momentum tensor. If we had Virasoro symmetry we would have the OPEs <br /><span class="math display">$$T(y)\sigma(z) = \frac{\frac12 \Delta_\sigma\sigma(z)}{(y-z)^2} + \cdots$$</span><br /><span class="math display">$$T(y)T(z) = \frac{\frac12 c }{(y-z)^4} + \cdots$$</span><br /> where <span class="math inline"><i>Δ</i><sub><i>σ</i></sub></span> is the total (left + right) conformal dimension. The interpretation is that we have the OPE coefficient <span class="math inline">$\lambda_{\sigma\sigma T}= \frac12 \Delta_\sigma$</span>, with <span class="math inline"><i>T</i></span> normalized such that <span class="math inline">$\left<TT\right> = \frac12 c$</span>. However, if we forget about Virasoro symmetry and use global conformal symmetry instead, the natural normalization becomes <span class="math inline">$\left<TT\right> =1$</span>, and the OPE coefficient becomes <br /><span class="math display">$$\lambda_{\sigma\sigma T} = \frac{\Delta_\sigma}{\sqrt{2c}} \qquad \text{or equivalently} \qquad c = \frac12 \left(\frac{\Delta_\sigma}{\lambda_{\sigma\sigma T}}\right)^2$$</span><br /> In order to get an upper bound on <span class="math inline"><i>c</i></span>, we therefore need an upper bound on <span class="math inline"><i>Δ</i><sub><i>σ</i></sub></span>, and a lower bound on <span class="math inline"><i>λ</i><sub><i>σ</i><i>σ</i><i>T</i></sub></span>. In the article we have marginality bounds such as <span class="math inline"><i>Δ</i><sub><i>σ</i></sub> < 2</span> for some scalar primary fields: the remaining task is to find lower bounds on <span class="math inline"><i>λ</i><sub><i>σ</i><i>σ</i><i>T</i></sub></span>.<br />
<br />
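As a quick numerical sanity check (my own sketch, taking the known Ising values <span class="math inline">$c=\frac12$</span> and <span class="math inline"><i>Δ</i><sub><i>σ</i></sub> = 1/8</span> as inputs rather than deriving them), the relation between the central charge and the OPE coefficient inverts as expected:

```python
import math

# Known 2d Ising data, taken as inputs rather than derived:
c = 0.5              # central charge
Delta_sigma = 1 / 8  # total (left + right) dimension of the spin field sigma

# OPE coefficient in the <TT> = 1 normalization:
# lambda_{sigma sigma T} = Delta_sigma / sqrt(2 c)
lam = Delta_sigma / math.sqrt(2 * c)

# Inverting the relation recovers the central charge:
c_check = 0.5 * (Delta_sigma / lam) ** 2

print(lam, c_check)  # 0.125 0.5
```

So a lower bound on <span class="math inline"><i>λ</i><sub><i>σ</i><i>σ</i><i>T</i></sub></span> together with an upper bound on <span class="math inline"><i>Δ</i><sub><i>σ</i></sub></span> would indeed translate into an upper bound on <span class="math inline"><i>c</i></span>.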
The bounds can afford to be far from optimal, as we only need to show <span class="math inline"><i>c</i> < 1</span> while the Ising model has <span class="math inline">$c=\frac12$</span>. The numerical implementation of the unitary bootstrap is designed for finding bounds that are very close to exact values: maybe the coarse bounds that we need can be obtained analytically. The hope is that such techniques can lead to uniqueness statements for all unitary minimal models. (Non-unitary CFTs would require different techniques.) Ultimately, the interesting and difficult problem is to discover new unitary CFTs with <span class="math inline"><i>c</i> ≥ 1</span>.Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com2tag:blogger.com,1999:blog-9119793002820072645.post-39426744102223262062019-05-27T13:27:00.000-07:002019-05-27T13:27:52.829-07:00Academics and Wikipedia: the WikiJournal experimentSince November 2017, I have been an editor of the <a href="https://en.wikiversity.org/wiki/WikiJournal_of_Science">WikiJournal of Science</a>, a Wikipedia-integrated, broad scope, libre open access journal. For me this is one way of encouraging academics to write in Wikipedia, by making it possible to publish Wikipedia articles in a recognized academic journal. The WikiJournals as they now exist may not yet be ideal for that, but they are already providing valuable insights into the difference between Wikipedia standards and academic standards, academics' attitudes towards Wikipedia, etc. <br />
<br />
I am discussing these insights in <a href="https://en.wikipedia.org/wiki/User:Sylvain_Ribault/WJS_essay">a Wikipedia essay</a>, for which this blog post is an announcement. This leads me to suggest that WikiJournals be radically reformed - or that any organization with similar aims should follow a different approach. The essay can be discussed at its <a href="https://en.wikipedia.org/wiki/User_talk:Sylvain_Ribault/WJS_essay">Talk page</a>. Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com1tag:blogger.com,1999:blog-9119793002820072645.post-58122675298337160662019-05-09T14:26:00.000-07:002019-05-13T07:36:58.735-07:00One software to rule them all? Open source alternatives to Mathematica<i>This post is based on a joint talk with Riccardo Guida given at IPhT Saclay on May 7th.</i><br />
<br />
Wolfram’s Mathematica has been the dominant computer algebra system for decades (at least in theoretical physics), and in an advertisement it even compared itself to Sauron’s One Ring.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-96hmhr-hZhk/XNSTKdlThfI/AAAAAAAACOE/lW6ebGOq_KIAmL1eI38DDwCHss2_0ruywCLcBGAs/s1600/ring.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="260" data-original-width="208" src="https://4.bp.blogspot.com/-96hmhr-hZhk/XNSTKdlThfI/AAAAAAAACOE/lW6ebGOq_KIAmL1eI38DDwCHss2_0ruywCLcBGAs/s1600/ring.png" /></a></div>
<br />
Mathematica’s dominance however does not come from black magic, but rather from its quality and power compared to other available computer algebra systems. But dominant positions are often abused, and Wolfram’s commercial practices can verge on the abusive, though much less systematically than, say, Elsevier’s. In this blog post we will denounce the problems with Mathematica, and discuss four open source <a href="https://en.wikipedia.org/wiki/List_of_computer_algebra_systems">alternatives</a>: <i><b>SymPy</b></i>, <i><b>SageMath</b></i>, <i><b>Maxima</b></i>, and <i><b>FriCAS</b></i>. In the case of <i><b>SymPy</b></i>, we will also provide a <a href="https://www.ipht.fr/Docspht//articles/t19/051/public/demo_notebook_output.html">demonstration notebook</a>.<br />
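To give a small taste of what such a notebook contains (an illustrative sketch of my own, not excerpted from the linked notebook), here are two standard symbolic computations in SymPy:

```python
from sympy import symbols, integrate, series, exp, sin, oo

x = symbols('x')

# Symbolic integration, the analogue of Mathematica's Integrate[]:
gaussian = integrate(exp(-x**2), (x, -oo, oo))
print(gaussian)  # sqrt(pi)

# Series expansion, the analogue of Series[]:
print(series(sin(x) / x, x, 0, 6))  # 1 - x**2/6 + x**4/120 + O(x**6)
```

Since SymPy runs inside ordinary Python, such snippets can be version-controlled and rerun by anyone, with no license involved.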
<a name='more'></a>
<h4 id="the-trouble-with-mathematica">
The trouble with Mathematica</h4>
Since a Mathematica license comes at zero marginal cost to Wolfram, pricing is determined by Wolfram’s commercial policy, and can <b>vary a lot between customers</b>. Licenses are typically cheap for students; prices are higher for richer customers, and particularly high for non-academic institutions.<br />
<br />
At IPhT Saclay, we have seen rather <b>wild shifts in pricing</b>: we used to get 200 institute-wide licenses for about 25,000 euros a year; we now get 10 licenses for the same price. (Anyone at IPhT can use these licenses, but we can run no more than 10 instances of Mathematica simultaneously.) Such shifts do not obey consistent rules, and depend a lot on the particular salesperson we are dealing with at a given time. We are sometimes offered bargains: for example, we recently bought cheap perpetual licenses for individual users on MacOS. Alas, in this case, the so-called perpetual licenses turned out to be unusable on the next version of MacOS – unless we paid 700 euros per license for an upgrade.<br />
<br />
The main issue with Mathematica may however not be its price, but more generally its availability. If you write some Mathematica code now, there is <b>no guarantee</b> that it can be run by your collaborators, the readers of your articles, or even by yourself in the future. (And the problem is compounded by the evolution of Mathematica, with one new major version every 2-3 years on average.) Your work is now subject to control by Wolfram: this is inconvenient and expensive nowadays, and could become disastrous if Wolfram got serious about milking the cow one day.<br />
<h4 id="advantages-of-open-source-software">
Advantages of open source software</h4>
In addition to solving the issues of cost and availability, open source software has a number of advantages:<br />
<ul>
<li>You can examine what the software does, and <b>need not <a href="https://reference.wolfram.com/language/tutorial/WhyYouDoNotUsuallyNeedToKnowAboutInternals.html">trust a black box</a></b>.</li>
<li>You can <b>influence development</b> not only by reporting bugs, but also by writing code yourself, or <a href="https://www.quansight.com/projects">giving money</a> for adding a particular feature.</li>
<li><b>Collaborative coding</b> is easier with open source software, since access to your code is all that others need in order to check, reproduce and improve it.</li>
</ul>
This may sound well and good, but are there viable open source alternatives to Mathematica? The answer would not have been obvious 10 or 15 years ago, but open source alternatives have progressed a lot since then, and it is now a qualified yes. To be fair, Mathematica certainly remains the most complete and advanced computer algebra system. But we are not asking competitors to perform as well as Mathematica in all respects; we are only asking them to <b>meet the needs of most researchers</b>.<br />
<br />
We will argue that open source competitors are good enough to allow many researchers to do without Mathematica. The main remaining reasons that can prevent people from switching to open source are:<br />
<ul>
<li>if they need a particular advanced feature of Mathematica that is absent in the competitors,</li>
<li>if they are captive of a large existing body of code and/or specialized Mathematica packages,</li>
<li>if they belong to large collaborations, which would require a concerted switch.</li>
</ul>
<h4 id="computer-algebra-systems-at-ipht">
Computer algebra systems at IPhT</h4>
We did a survey of computer algebra systems at IPhT, and got 42 answers (about half the researchers). The question was:<br />
<blockquote>
<i>Which computer algebra system(s) did you use in the last 3 years?</i></blockquote>
<style>
#mytable{
width:50%
}
#mytable, th, td {
border: 1px solid black;
border-collapse: collapse;
}
#mytable th, #mytable td {
padding: 3px;
text-align: right;
}
</style>
<br />
<table id="mytable"><thead>
<tr class="header"><th align="center">Name</th><th align="center">Open?</th><th align="center"># Users</th></tr>
</thead><tbody>
<tr class="odd"><td align="center">Mathematica</td><td align="center"></td><td align="center">35</td></tr>
<tr class="even"><td align="center">Maple</td><td align="center"></td><td align="center">7</td></tr>
<tr class="odd"><td align="center">Matlab</td><td align="center"></td><td align="center">3</td></tr>
<tr class="even"><td align="center"><i><b>SymPy</b></i>/NumPy/SciPy</td><td align="center">✔</td><td align="center">10</td></tr>
<tr class="odd"><td align="center"><i><b>SageMath</b></i></td><td align="center">✔</td><td align="center">2</td></tr>
<tr class="even"><td align="center"><i><b>Maxima</b></i></td><td align="center">✔</td><td align="center">4</td></tr>
<tr class="odd"><td align="center"><i><b>FriCAS</b></i></td><td align="center">✔</td><td align="center">1</td></tr>
<tr class="even"><td align="center"><i><b>Cadabra</b></i></td><td align="center">✔</td><td align="center">1</td></tr>
</tbody></table>
<br />
As expected, Mathematica is more popular than all the alternatives combined. The nontrivial result is that the most popular alternative is <i><b>SymPy</b></i> (together with other Python packages), which is used by nearly one quarter of respondents.<br />
<h4 id="four-open-source-computer-algebra-systems">
Four open source computer algebra systems</h4>
<br />
We begin with a technical fact sheet:<br />
<style>
#mytable2{
width:100%
}
#mytable2, th, td {
border: 1px solid black;
border-collapse: collapse;
}
#mytable2 th, #mytable2 td {
padding: 3px;
text-align: right;
}
</style>
<br />
<table id="mytable2"><thead>
<tr class="header"><th align="left"></th><th align="left"><i><b>SymPy</b></i></th><th align="left"><i><b>SageMath</b></i></th><th align="left"><i><b>Maxima</b></i></th><th align="left"><i><b>FriCAS</b></i></th></tr>
</thead><tbody>
<tr class="odd"><td align="left">Initial release</td><td align="left">2007</td><td align="left">2005</td><td align="left">1998</td><td align="left">2007</td></tr>
<tr class="even"><td align="left">Non-OS base</td><td align="left">–</td><td align="left">–</td><td align="left">Macsyma 1982</td><td align="left">Axiom 1977</td></tr>
<tr class="odd"><td align="left">Contributors</td><td align="left"><span class="math inline">59</span></td><td align="left"><span class="math inline">63</span></td><td align="left"><span class="math inline">11</span></td><td align="left"><span class="math inline">3</span></td></tr>
<tr class="even"><td align="left">User Language</td><td align="left">Python</td><td align="left">Python-ish</td><td align="left"><i><b>Maxima</b></i></td><td align="left">SPAD</td></tr>
<tr class="odd"><td align="left">Interpreter</td><td align="left">CPython, PyPy</td><td align="left">CPython</td><td align="left">Common Lisp</td><td align="left">Common Lisp</td></tr>
<tr class="even"><td align="left">Expected speed</td><td align="left"><span class="math inline"><i>C</i>/50</span></td><td align="left"><span class="math inline"><i>C</i>/50</span> to <span class="math inline"><i>C</i></span></td><td align="left"><span class="math inline"><i>C</i>/5</span></td><td align="left"><span class="math inline"><i>C</i>/5</span></td></tr>
<tr class="odd"><td align="left">Notebooks</td><td align="left">Jupyter, TeXmacs</td><td align="left">Jupyter, TeXmacs</td><td align="left">wxMaxima, TeXmacs</td><td align="left">TeXmacs</td></tr>
</tbody></table>
<br />
The fine print:<br />
<ul>
<li>By contributors we mean how many people did more than 9 commits between 2018-03-15 and 2019-03-15. Obviously <i><b>SymPy</b></i> and <i><b>SageMath</b></i> are much more actively developed than <i><b>FriCAS</b></i> and <i><b>Maxima</b></i>: this is particularly impressive in the case of <i><b>SymPy</b></i>, whose scope is narrower than <i><b>SageMath</b></i>’s. Both <i><b>SymPy</b></i> and <i><b>SageMath</b></i> regularly employ Google-funded interns.</li>
<li>Both <i><b>SymPy</b></i> and <i><b>SageMath</b></i> are based on the Python programming language, a very widespread and easy-to-learn language. A web search can quickly find an answer to almost any plausible question about Python. <i><b>FriCAS</b></i> has a dedicated language inspired by mathematics, and <i><b>Maxima</b></i>’s language is not far from Mathematica’s.</li>
<li>The expected speed is compared to that of the <span class="math inline"><i>C</i></span> programming language. Programs based on Python are slower, except when Python serves as an interface to code that actually runs in <span class="math inline"><i>C</i></span>. There is an <a href="https://github.com/symengine/symengine">ongoing effort</a> to speed up <i><b>SymPy</b></i> and <i><b>SageMath</b></i> in this way.</li>
</ul>
<i><b>SymPy</b></i>, <i><b>SageMath</b></i>, <i><b>Maxima</b></i> and <i><b>FriCAS</b></i> are multi-platform, generalist computer algebra systems that have the expected basic features, including:<br />
<ul>
<li>limits,</li>
<li>derivatives,</li>
<li>Taylor series,</li>
<li>basic integrals,</li>
<li>linear algebra,</li>
<li>special functions,</li>
<li>polynomial equations,</li>
<li>basic differential equations,</li>
<li>replacements,</li>
<li>arbitrary precision numerics.</li>
</ul>
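To make this list concrete, here is a minimal sketch of these basic features in SymPy (our own illustration, not taken from the demonstration notebook):

```python
# A quick tour of the basic features listed above (illustrative sketch).
import sympy as sp

x = sp.symbols('x')

# Limits and derivatives
assert sp.limit(sp.sin(x) / x, x, 0) == 1
assert sp.diff(x**3, x) == 3 * x**2

# Taylor series: sin(x) = x - x**3/6 + O(x**5)
print(sp.series(sp.sin(x), x, 0, 5))

# A basic integral: Gamma(2) = 1
assert sp.integrate(x * sp.exp(-x), (x, 0, sp.oo)) == 1

# Linear algebra
assert sp.Matrix([[1, 2], [3, 4]]).det() == -2

# Polynomial equations
assert set(sp.solve(x**2 - 2, x)) == {-sp.sqrt(2), sp.sqrt(2)}

# Replacements (substitutions)
assert (x**2 + 1).subs(x, 3) == 10

# Arbitrary-precision numerics: pi to 30 digits
print(sp.N(sp.pi, 30))
```

Each of the other three systems can do the same computations in its own syntax.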
They differ in the availability and quality of more advanced features. Strong points include<br />
<ul>
<li><i><b>SymPy</b></i>: Generalized hypergeometrics, automatic generation and execution of code, rule-based integration tools (RUBI).</li>
<li><i><b>SageMath</b></i>: Manifolds, number theory, combinatorics.</li>
<li><i><b>FriCAS</b></i>: Antiderivatives, noncommutative algebra, sparse polynomials.</li>
</ul>
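To illustrate one of <i><b>SymPy</b></i>’s strong points, here is a brief sketch (our own) of automatic generation of code, using the standard functions <code>lambdify</code> and <code>ccode</code>:

```python
# Sketch: turning a symbolic expression into a numerical function and into C code.
import sympy as sp

x = sp.symbols('x')
expr = sp.exp(-x**2)

# lambdify compiles the expression into a plain numerical function
# (backed by NumPy when available, by the math module otherwise).
f = sp.lambdify(x, expr)
print(f(0.0))  # exp(0) = 1.0

# ccode emits a C expression, ready to paste into a C program.
print(sp.ccode(expr))
```

This is one way in which symbolic and numerical computations can be combined in the Python ecosystem.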
The architecture of <i><b>SageMath</b></i> is peculiar, as it includes the other open source systems, plus some modules of its own:<br />
<blockquote>
<i><b>SageMath</b></i> = <i><b>SageMath</b></i>Core <span class="math inline">⊕</span> <i><b>SymPy</b></i> <span class="math inline">⊕</span> <i><b>Maxima</b></i> <span class="math inline">⊕⋯⊕</span> <i><b>FriCAS</b></i></blockquote>
This architecture has obvious advantages and disadvantages; in particular, not all features of the included subsystems are available to users in a transparent way.<br />
<br />
The best way to get an idea of the capabilities of a computer algebra system is a live demonstration. We have written a demonstration notebook for <i><b>SymPy</b></i>. It is publicly accessible in <a href="https://www.ipht.fr/Docspht//search/article.php?id=t19/051">two versions</a>: the .ipynb notebook itself, which can be run using Jupyter and <i><b>SymPy</b></i>, and the static <a href="https://www.ipht.fr/Docspht//articles/t19/051/public/demo_notebook_output.html">.html output</a>, which can be viewed in a browser. The end section of the notebook is a collection of useful weblinks on scientific Python.Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com3tag:blogger.com,1999:blog-9119793002820072645.post-22184926963181990932019-02-06T15:51:00.000-08:002019-02-06T23:49:08.790-08:00Solving two-dimensional conformal field theories<div class="unnumbered" id="introduction">
<i>This is the text of my <a href="https://en.wikipedia.org/wiki/Habilitation">habilitation</a> defense, which took place on December 21st 2018. The members of the jury were Denis Bernard, Matthias Gaberdiel, Jesper Jacobsen, Vyacheslav
Rychkov, Véronique Terras, Gérard Watts and Jean-Bernard Zuber. </i></div>
<div class="unnumbered" id="introduction">
<br /></div>
<div class="unnumbered" id="introduction">
<i>In this habilitation defense, I gave a subjective overview of some recent progress in solving two-dimensional conformal field theories. I discussed what solving means and which techniques can be used. I insisted that there is much to discover about Virasoro-based CFTs, i.e. CFTs that have no symmetries beyond conformal symmetry. I claimed that we should start with CFTs that exist for generic central charges, because they are simpler than CFTs at rational central charges, and can nevertheless include them as special cases or limits. Finally, I argued that in addition to writing research articles, we should use various other media, in particular Wikipedia. </i></div>
<h2 class="unnumbered" id="introduction">
Introduction</h2>
<br />
Two-dimensional CFTs are defined by the presence of a Virasoro symmetry algebra. This symmetry is sometimes enough for solving CFTs, and even classifying the CFTs that obey some extra conditions. For example, we can classify CFTs whose spaces of states decompose into finitely many irreducible representations of the Virasoro algebra: they are called minimal models. In some cases, Virasoro symmetry is not enough, but the CFT can nevertheless be solved thanks to additional symmetries. In particular, we can have symmetry algebras that contain the Virasoro algebra.<br />
Let me discuss a few CFTs that I find particularly interesting. I will classify them according to their symmetry algebras, and characterize these algebras by the spins of the corresponding chiral fields. In this notation, the Virasoro algebra is <span class="math inline">\((2)\)</span>, as its generators are the modes of the energy-momentum tensor, which has spin <span class="math inline">\(2\)</span>. The sum of the spins of the generators gives you a rough idea of the complexity of an algebra.<br />
<a name='more'></a><br />
I will now discuss a few symmetry algebras, and a few CFTs with these symmetries, while citing a subjectively selected sample of works. Each CFT has a progress bar indicating how far it is from being solved: again, a subjective assessment.<br />
<ul>
<li>Beyond the Virasoro algebra, we have the affine Lie algebras <span class="math inline">\(\widehat{\mathfrak{sl}}_2\)</span> and <span class="math inline">\(\widehat{\mathfrak{sl}}_3\)</span>, and the well-known <span class="math inline">\(W\)</span>-algebra <span class="math inline">\(W_3\)</span>. The only algebra that is not well-known is the algebra of type <span class="math inline">\((2, 1, 1)\)</span>, which has no name. This is a generalization of both the Virasoro and <span class="math inline">\(\widehat{\mathfrak{sl}}_2\)</span> algebras, which interpolates between them.</li>
<li>Liouville theory is a well-known CFT with a continuous spectrum.</li>
<li>The recently found non-diagonal CFTs involve infinitely many representations of the Virasoro algebra. By non-diagonal I mean that their primary fields can have nonzero spins.</li>
<li>The <span class="math inline">\(H_3^+\)</span> model is based on the affine <span class="math inline">\(\mathfrak{sl}_2\)</span> algebra; it is more complicated than Liouville theory, but we found that you can deduce its solution from Liouville theory. This is the meaning of my upward-pointing arrows: reformulating CFTs in terms of simpler CFTs. Unfortunately, the arrow from the <span class="math inline">\(W_3\)</span> algebra to the affine <span class="math inline">\(\mathfrak{sl}_3\)</span> algebra works only in the critical level limit, which is why it is dashed.</li>
<li>The interpolating CFTs do not have an official name: they interpolate between Liouville theory and the <span class="math inline">\(H_3^+\)</span> model.</li>
</ul>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-1rprzH3k55w/XFtr89Iu8wI/AAAAAAAACL8/6kt26xeKjdQveNzcnQ8ek-n7DU_kHk8HQCLcBGAs/s1600/1901hdr_img-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="990" data-original-width="1221" height="321" src="https://1.bp.blogspot.com/-1rprzH3k55w/XFtr89Iu8wI/AAAAAAAACL8/6kt26xeKjdQveNzcnQ8ek-n7DU_kHk8HQCLcBGAs/s400/1901hdr_img-1.png" width="400" /></a></div>
This leads to the three main messages of the rest of this talk:<br />
<ul>
<li>The progress bars show an emphasis on solving CFTs: I will discuss what this means.</li>
<li>My most recent works appear at the bottom of the diagram, in CFTs with Virasoro symmetry only: there is still much to discover about such CFTs.</li>
<li>All CFTs on the diagram have a parameter: the central charge of the Virasoro algebra. I have not drawn minimal models, which exist for rational central charges only. I will discuss the relations between generic and rational CFTs.</li>
</ul>
The last message about Wikipedia is unrelated, but I will mention it anyway because it is also an important aspect of my work.<br />
<br />
<h2 class="unnumbered" id="it-is-good-to-build-models-it-is-better-to-solve-them">
It is good to build models, it is better to solve them</h2>
<br />
I have been talking of solving CFTs, and I have even drawn progress bars: let me now discuss what this means. Solving a theory means computing observables: we have to say what the observables are, and how we compute them. I will define the observables as<br />
<ul>
<li>the spectrum, i.e. the space of states together with the action of the symmetries on that space,</li>
<li>the correlation functions on the sphere.</li>
</ul>
These observables are particularly adapted to applications to statistical physics. Other choices of observables are possible, in particular non-local observables such as boundaries and defects, entanglement entropy, or integrated correlation functions.<br />
On the other hand, building a CFT means defining the observables, without necessarily computing them. In this Section I will discuss several methods for building and solving two-dimensional CFTs:<br />
<ul>
<li>the Lagrangian method,</li>
<li>topological recursion,</li>
<li>the conformal bootstrap:<br />
<ul>
<li>modular invariance,</li>
<li>crossing symmetry:<br />
<ul>
<li>analytic,</li>
<li>semi-analytic,</li>
<li>numerical.</li>
</ul>
</li>
</ul>
</li>
</ul>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-bFnxrBWW5yc/XFtu6MV4MzI/AAAAAAAACMQ/zewMWepsiGIt7wrvNUJefrRvmeTjbPMHgCLcBGAs/s1600/1901hdr_img-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="498" data-original-width="550" height="289" src="https://2.bp.blogspot.com/-bFnxrBWW5yc/XFtu6MV4MzI/AAAAAAAACMQ/zewMWepsiGIt7wrvNUJefrRvmeTjbPMHgCLcBGAs/s320/1901hdr_img-1.png" width="320" /></a></div>
<br />
In the bootstrap method, there is no analog of the Lagrangian or spectral curve: we use symmetry and consistency axioms for constraining observables.<br />
In quantum field theory, we commonly use Lagrangians for building models, and also for solving them via perturbative computations of functional integrals. In two-dimensional CFT, there is a variant of the Lagrangian method that leads to exact calculations, called the Coulomb gas method. But this method only works for quantities that obey some sort of momentum conservation: it is good for building some CFTs, but not so good for solving them.<br />
In topological recursion, the basic object is not a Lagrangian but a spectral curve. To an <span class="math inline">\(N\)</span>-point function <span class="math inline">\(\left<\prod_{i=1}^N V_{\Delta_i}(z_i)\right>\)</span>, we can associate the non-commutative spectral curve <span class="math display">\[\begin{aligned}
y^2 = \sum_{i=1}^N\left(\frac{\Delta_i}{(x-z_i)^2}+\frac{\beta_i}{x-z_i}\right) \quad \text{with} \quad [y, x] =1\ .\end{aligned}\]</span> This spectral curve in principle allows you to perturbatively compute the <span class="math inline">\(N\)</span>-point function <a href="https://arxiv.org/abs/1209.3984">[SR + Chekhov + Eynard 2012]</a>. The computations are not very efficient, but the role of the spectral curve is rather to relate CFT with other integrable models such as matrix models. A particularly powerful version of this idea occurs when the central charge is one, which relates correlation functions to solutions of the Painlevé VI differential equation <a href="https://arxiv.org/abs/1207.0787">[Gamayun + Iorgov + Lisovyy 2012]</a>, <a href="https://arxiv.org/abs/1307.4865">[SR + Eynard 2013]</a>.<br />
Let me come to the conformal bootstrap method: the method which I use for actually solving models. The simplest incarnation of the method is called the modular bootstrap. In the modular bootstrap, we use modular invariance of the torus partition function for deriving an equation on the spectrum <span class="math inline">\(S\)</span> alone, i.e. an equation that does not involve correlation functions: <span class="math display">\[\begin{aligned}
\operatorname{Tr}_S e^{2\pi i\tau(L_0-\frac{c}{24})} e^{2\pi i\bar\tau(\bar L_0-\frac{c}{24})}
=
\operatorname{Tr}_S e^{-\frac{2\pi i}{\tau}(L_0-\frac{c}{24})} e^{-\frac{2\pi i}{\bar\tau}(\bar L_0-\frac{c}{24})}\end{aligned}\]</span> where <span class="math inline">\(L_0\)</span> is an element of the Virasoro algebra, <span class="math inline">\(c\)</span> the central charge, and <span class="math inline">\(\tau\)</span> the modulus of the torus. The modular bootstrap has been used successfully for classifying minimal models. However, it is not sufficient for having a consistent CFT, and actually not even necessary, as some CFTs such as generalized minimal models do not exist on the torus. And the torus partition function does not always encode enough information for recovering the space of states. In particular,<br />
<ul>
<li>In Liouville theory the partition function does not even depend on the central charge.</li>
<li>In the Potts model, the spectrum contains indecomposable but reducible representations of the Virasoro algebra, whose structures are not fully determined by the partition function. And their multiplicities in the partition function can be fractional and/or negative.</li>
</ul>
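Despite these caveats, the modular invariance condition itself is easy to check numerically in a textbook example that is not on my map: the free boson, whose torus partition function <span class="math inline">\(Z(\tau) = 1/\big(\sqrt{\operatorname{Im}\tau}\, |\eta(\tau)|^2\big)\)</span> involves the Dedekind eta function. The following sketch is my own illustration (the truncation order of the eta product is arbitrary):

```python
# Numerical check of modular invariance for the free boson partition function
# Z(tau) = 1 / (sqrt(Im tau) * |eta(tau)|^2), with eta the Dedekind eta function.
import cmath
import math

def dedekind_eta(tau, nmax=200):
    """eta(tau) = q^(1/24) * prod_{n>=1} (1 - q^n) with q = exp(2 pi i tau),
    truncated at nmax factors."""
    q = cmath.exp(2j * cmath.pi * tau)
    prod = 1.0 + 0j
    for n in range(1, nmax + 1):
        prod *= 1 - q**n
    return cmath.exp(2j * cmath.pi * tau / 24) * prod

def z_free_boson(tau):
    return 1 / (math.sqrt(tau.imag) * abs(dedekind_eta(tau))**2)

tau = 0.3 + 1.1j
# Invariance under tau -> tau + 1 and tau -> -1/tau generates the modular group.
assert abs(z_free_boson(tau) - z_free_boson(tau + 1)) < 1e-10
assert abs(z_free_boson(tau) - z_free_boson(-1 / tau)) < 1e-10
print(z_free_boson(tau))
```

The eta product converges extremely fast when <span class="math inline">\(\operatorname{Im}\tau\)</span> is of order one, so a modest truncation already gives double-precision accuracy.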
Let me now focus on crossing symmetry of the sphere four-point function, a more complicated equation that involves not only the spectrum <span class="math inline">\(S\)</span>, but also the correlation functions, more precisely the three-point structure constants <span class="math inline">\(C\)</span>:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-v6z0EEXSYpM/XFtwh3fDTTI/AAAAAAAACMk/IIPHsex5w6gVKUFNaVEFjY6XDHOkbZuZQCLcBGAs/s1600/1901hdr_img-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="237" data-original-width="983" height="95" src="https://2.bp.blogspot.com/-v6z0EEXSYpM/XFtwh3fDTTI/AAAAAAAACMk/IIPHsex5w6gVKUFNaVEFjY6XDHOkbZuZQCLcBGAs/s400/1901hdr_img-1.png" width="400" /></a></div>
<br />
The use of crossing symmetry comes in different flavours, depending on how much control we have on the spectrum:<br />
<ul>
<li>In the analytic bootstrap, sums over the spectrum have finitely many nonvanishing terms (due to fusion rules), and structure constants can be determined analytically. This is the case in minimal models, and in all 2d CFTs that appeared in my map.</li>
<li>In what could be called the semi-analytic bootstrap, we know or guess the spectrum, and numerically solve crossing symmetry for the three-point function. We did this in the Potts model in <a href="https://arxiv.org/abs/1607.07224">[SR + Picco + Santachiara 2016]</a>. It was later understood that our spectrum was only an approximation of the Potts model’s spectrum, and that it was actually the spectrum of another non-diagonal CFT.</li>
<li>In the numerical bootstrap, we numerically determine both the spectrum and the correlation functions. This flavour of the bootstrap is commonplace in more than two dimensions. 2d CFTs that were solved analytically provide testing grounds for such techniques.<br />
</li>
</ul>
<h2 class="unnumbered" id="theres-plenty-of-room-at-the-bottom">
“There’s plenty of room at the bottom”</h2>
<br />
Let me turn to the simplest possible CFTs: those that only have a Virasoro symmetry algebra. I will argue that the space of these Virasoro-CFTs is rich and poorly known. First I should define what a Virasoro-CFT is, as by definition all two-dimensional CFTs have Virasoro symmetry. I do not want to ignore or to break a larger symmetry algebra: I want CFTs whose spectrums are small enough that they have a chance of being solvable. For this I require that the multiplicities of indecomposable representations of the Virasoro algebra in the spectrum are bounded – not just finite, bounded. By comparison, minimal models are defined by the condition that the spectrum is made of finitely many irreducible representations: I am now extending this definition a lot, but hopefully not too much.<br />
Let me discuss some known Virasoro-CFTs. (The bound on multiplicities is <span class="math inline">\(1\)</span> or <span class="math inline">\(2\)</span> in all my examples.) I will plot them in the complex plane of central charges, as the central charge of the Virasoro algebra determines some important properties of CFTs, starting with their existence.<br />
<ul>
<li>Liouville theory (blue), with its continuous spectrum: the initial Lagrangian definition held for <span class="math inline">\(c\geq 25\)</span>, but the theory can be analytically extended to the whole complex plane minus the half-line <span class="math inline">\((-\infty, 1)\)</span>, and then to that half-line too although not analytically.</li>
<li>Minimal models (green bars) are parametrized by two coprime integers, <span class="math display">\[\begin{aligned}
c_{p, q} = 1 - 6\frac{(p-q)^2}{pq} \quad \text{with} \quad 2\leq p < q
\end{aligned}\]</span> Unitary minimal models form a small subset (<span class="math inline">\(q=p+1\)</span>), and unitarity plays no role in classifying minimal models. For each allowed central charge, there are actually <span class="math inline">\(1,2\)</span> or <span class="math inline">\(3\)</span> distinct minimal models. Larger bars mean fewer representations in the spectrum.</li>
<li>Recently found non-diagonal CFTs with infinite spectrums (non-hatched area) <a href="https://arxiv.org/abs/1711.08916">[SR + Migliaccio 2017]</a>: they exist for <span class="math inline">\(\Re c\leq 13\)</span>. For each <span class="math inline">\(c\neq 1\)</span> we actually have two distinct CFTs, which are related by analytic continuation around <span class="math inline">\(c=1\)</span>. We stumbled upon these CFTs when trying to solve the Potts model <a href="https://arxiv.org/abs/1607.07224">[SR + Picco + Santachiara 2016]</a>, but we now know that they differ from the Potts model.</li>
<li>This brings us to the <span class="math inline">\(Q\)</span>-state Potts model (pink): a CFT with a discrete, non-diagonal, really complicated spectrum. When the parameter <span class="math inline">\(Q\)</span> spans the complex plane, the corresponding central charge spans a nice-looking region: <span class="math display">\[\begin{aligned}
c= 13 - 6\beta^2 -\frac{6}{\beta^2}\quad \text{with} \quad \frac12 \leq \Re \beta^2\leq 1 \quad \text{such that} \quad Q = 4\cos^2\pi \beta^2
\end{aligned}\]</span> In contrast to all other CFTs that I mentioned, the Potts model has little prospect of an analytic solution, although it may be possible to analytically determine the spectrum.</li>
</ul>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-9OtBdVHNyvg/XFtvyQpCQfI/AAAAAAAACMc/8WrjZDbhlrQxSxeEMgPpaXpgga0PUnkDwCLcBGAs/s1600/1901hdr_img-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="690" data-original-width="1186" height="231" src="https://3.bp.blogspot.com/-9OtBdVHNyvg/XFtvyQpCQfI/AAAAAAAACMc/8WrjZDbhlrQxSxeEMgPpaXpgga0PUnkDwCLcBGAs/s400/1901hdr_img-1.png" width="400" /></a></div>
<br />
<br />
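The central charge formulas above are easy to evaluate; the following dependency-free sketch (function names are mine) recovers the Ising value <span class="math inline">\(c = \frac12\)</span> both from the minimal model series and from the Potts line:

```python
# Evaluating the central-charge formulas quoted above.
from math import cos, gcd, isclose, pi

def c_minimal(p, q):
    """Central charge c_{p,q} = 1 - 6 (p-q)^2 / (p q), for 2 <= p < q coprime."""
    assert 2 <= p < q and gcd(p, q) == 1
    return 1 - 6 * (p - q)**2 / (p * q)

def c_potts(beta2):
    """Central charge c = 13 - 6 beta^2 - 6 / beta^2 on the Potts line."""
    return 13 - 6 * beta2 - 6 / beta2

def q_potts(beta2):
    """Number of states Q = 4 cos^2(pi beta^2)."""
    return 4 * cos(pi * beta2)**2

# Ising: the (3,4) minimal model, and the Q = 2 point of the Potts line.
assert isclose(c_minimal(3, 4), 0.5)
assert isclose(q_potts(0.75), 2)
assert isclose(c_potts(0.75), 0.5)
print(c_minimal(3, 4), c_potts(0.75))
```

The agreement of the two values at <span class="math inline">\(Q=2\)</span> reflects the fact that the critical <span class="math inline">\(2\)</span>-state Potts model is the Ising model.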
For a given central charge, we can have up to <span class="math inline">\(6\)</span> different Virasoro-CFTs. And I did not even include free bosonic CFTs, of which there exist infinitely many at each central charge. Beyond that, there remains much to discover: we know some crossing-symmetric four-point functions that do not belong to any of these CFTs.<br />
<br />
<h2 class="unnumbered" id="generic-cases-are-simpler-than-special-cases">
Generic cases are simpler than special cases</h2>
<br />
Historically, the solving of two-dimensional CFTs started with rational theories: theories whose spectrums are made of finitely many representations of the Virasoro algebra. But are these really the simplest theories?<br />
To answer this question, let me consider fusion rules of degenerate representations. Let me define a degenerate representation as a representation whose fusion product with any irreducible representation is a sum of finitely many irreducible representations. Let me accept that for generic central charges, there exist two degenerate representations <span class="math inline">\(R_{\langle 2,1\rangle}\)</span> and <span class="math inline">\(R_{\langle 1,2\rangle}\)</span> whose fusion products with any Verma module are sums of two Verma modules: <span class="math display">\[\begin{aligned}
R_{\langle 2,1\rangle} \times V_P = \sum_\pm V_{P\pm \frac{\beta}{2}} \quad , \quad R_{\langle 1,2\rangle} \times V_P = \sum_\pm V_{P\pm \frac{1}{2\beta}}\ .\end{aligned}\]</span> Here the momentum <span class="math inline">\(P\)</span> and parameter <span class="math inline">\(\beta\)</span> are functions of the conformal dimension and central charge respectively, chosen for making fusion products simple. If I assume that the fusion product is associative, then I can deduce that fusion products of degenerate representations are degenerate, and that there exist degenerate representations <span class="math inline">\(R_{\langle r, s\rangle}\)</span> for <span class="math inline">\(r,s\in\mathbb{N}^*\)</span> with the momentums <span class="math inline">\(P_{\langle r,s\rangle} =\frac12(\beta r -\beta^{-1}s)\)</span>. I can also find the fusion products of all these degenerate representations with one another and with Verma modules, in particular <span class="math display">\[\begin{aligned}
R_{\langle r_1,s_1 \rangle} \times R_{\langle r_2,s_2 \rangle} = \sum_{r_3\overset{2}{=}|r_1-r_2|+1}^{r_1+r_2-1}\ \sum_{s_3\overset{2}{=}|s_1-s_2|+1}^{s_1+s_2-1} R_{\langle r_3,s_3 \rangle}\ , \qquad r_i,s_i\in\mathbb{N}^*\ .
\label{rrsr}\end{aligned}\]</span> These are the fusion rules of generalized minimal models, valid at generic central charges.<br />
Now let me consider cases when two degenerate representations coincide: say <span class="math inline">\(R_{\langle r,s\rangle} = R_{\langle p-r, q-s\rangle}\)</span>. This constrains the central charge to be <span class="math inline">\(c_{p,q}\)</span>, and we find a finite set of representations that is closed under fusion: <span class="math display">\[\begin{aligned}
R_{\langle r_1,s_1 \rangle} \times R_{\langle r_2,s_2 \rangle} = \sum_{r_3\overset{2}{=}|r_1-r_2|+1}^{\min(r_1+r_2,2p-r_1-r_2)-1}\ \sum_{s_3\overset{2}{=}|s_1-s_2|+1}^{\min(s_1+s_2,2q-s_1-s_2)-1} R_{\langle r_3,s_3 \rangle}\ , \quad \left\{\begin{array}{l} 1\leq r_i\leq p-1 \\ 1\leq s_i\leq q-1 \end{array}\right.\end{aligned}\]</span> These are the fusion rules of minimal models. They can be deduced from the fusion rules of generalized minimal models, but they are more complicated, as they depend on the central charge via <span class="math inline">\(p,q\)</span>. The structures of degenerate representations are also more complicated in minimal models. However, their structures are not really needed for computing correlation functions: what matters is fusion rules. The situation is similar with larger symmetry algebras such as W-algebras.<br />
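The two fusion rule formulas above translate directly into a few lines of Python. The following sketch is my own illustration (representations are labelled by their indices <span class="math inline">\(\langle r,s\rangle\)</span>); it implements both the generic and the minimal model rules:

```python
# Direct transcription of the fusion-rule formulas above.
# Representations R_{<r,s>} are labelled by pairs (r, s) of positive integers.

def fuse(rs1, rs2, pq=None):
    """Fusion product R_{r1,s1} x R_{r2,s2}, returned as a set of labels.

    With pq=None: generalized minimal model rules (generic central charge).
    With pq=(p, q): minimal model rules at c = c_{p,q}, where the sums are
    truncated due to the identification R_{r,s} = R_{p-r,q-s}.
    """
    (r1, s1), (r2, s2) = rs1, rs2
    rmax, smax = r1 + r2, s1 + s2  # range() excludes the upper bound
    if pq is not None:
        p, q = pq
        rmax = min(rmax, 2 * p - r1 - r2)
        smax = min(smax, 2 * q - s1 - s2)
    return {(r3, s3)
            for r3 in range(abs(r1 - r2) + 1, rmax, 2)
            for s3 in range(abs(s1 - s2) + 1, smax, 2)}

# Generic c: R_{2,1} x R_{2,1} = R_{1,1} + R_{3,1}
assert fuse((2, 1), (2, 1)) == {(1, 1), (3, 1)}

# Ising, c = c_{3,4} = 1/2: R_{2,2} x R_{2,2} = R_{1,1} + R_{1,3}
assert fuse((2, 2), (2, 2), pq=(3, 4)) == {(1, 1), (1, 3)}
print(sorted(fuse((2, 2), (2, 2), pq=(3, 4))))
```

The last computation matches the Ising fusion rule <span class="math inline">\(\sigma\times\sigma = \mathbf{1} + \epsilon\)</span>, with <span class="math inline">\(\sigma = R_{\langle 2,2\rangle}\)</span> and <span class="math inline">\(\epsilon = R_{\langle 1,3\rangle}\)</span>.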
So it is tempting to study generic central charges first, and to recover the rational cases as limits. We saw that this could be done at the level of fusion rules, where taking the limit is straightforward although not particularly easy. However, at the level of correlation functions, this can be more subtle, for several reasons:<br />
<ul>
<li>The finiteness of the limit can require cancellations of divergences: this happens with conformal blocks in Zamolodchikov’s recursive representation.</li>
<li>Representations can become logarithmic or otherwise more complicated.</li>
<li>There can be several different ways of taking the limit <span class="math inline">\(c\to c_{p,q}\)</span>, due to identities such as <span class="math inline">\(R_{\langle r,s\rangle} = R_{\langle p-r, q-s\rangle}\)</span>.</li>
</ul>
The subtleties are not bugs, but features: they allow rich structures to hide in relatively simple theories that exist at generic central charges <a href="https://arxiv.org/abs/1809.03722">[SR 2018]</a>.<br />
<br />
<h2 class="unnumbered" id="to-reach-many-readers-write-in-wikipedia">
To reach many readers, write in Wikipedia</h2>
<br />
In a few words, I would like to say why writing in Wikipedia is part of my work. Results are useless if they are not communicated effectively; the question is how best to do it. Let me consider a few written media that we can use:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-M5H5UpEcMLo/XFtuJanj_5I/AAAAAAAACMI/qvXQ-itMYFcYZO8HLCDNzUK9AXHeoFRjgCLcBGAs/s1600/1901hdr_img-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="642" data-original-width="897" height="285" src="https://4.bp.blogspot.com/-M5H5UpEcMLo/XFtuJanj_5I/AAAAAAAACMI/qvXQ-itMYFcYZO8HLCDNzUK9AXHeoFRjgCLcBGAs/s400/1901hdr_img-1.png" width="400" /></a></div>
<br />
<br />
By open media I mean texts that are not only publicly available, but also written in a widely readable style. In principle all these media can be useful, depending on what we want to say and to whom. We are certainly biased towards writing research articles, because careers are built on them. This does not prevent us from using other media though.<br />
Among these other media, Wikipedia stands out because everybody reads it, but very few physicists write in it. Or, to reformulate this more positively, whatever you write there will find many readers. This is supported by data: see the <a href="https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org&platform=all-access&agent=user&range=latest-30&pages=Liouville_field_theory|Minimal_models|Two-dimensional_conformal_field_theory|Virasoro_conformal_block">pageview statistics</a> for four Wikipedia articles that I have created or rewritten. The 10-20 daily views of each article mean thousands of yearly readers. And the number of readers is not very sensitive to an article’s quality: I did not see much change after greatly improving the article on Liouville theory. So Wikipedia shapes how two-dimensional CFTs are viewed by people like students or researchers from other fields, whether the articles are good or bad. Making them good is therefore important.<br />
But too few physicists write in Wikipedia. I have tried to write good articles, but I admit that they are probably biased towards my own points of view; you are all welcome to help rectify that.Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-2786181143630968322019-01-30T14:27:00.000-08:002019-01-30T14:29:21.032-08:00The Im-flip condition in the two-dimensional Potts modelI have been using this blog for publishing the reviewer reports that I write for journals, since the journals typically do not publish the reports. However, the new journal SciPost Physics does publish the reports for accepted articles. I have recently reviewed an article by <a href="https://arxiv.org/abs/1808.04380">Gorbenko, Rychkov and Zan</a> for SciPost Physics, and <a href="http://researchpracticesandtools.blogspot.com/2018/09/scipost-two-years-on.html">written about the experience</a>: it would seem that I need not blog about that article, since <a href="https://scipost.org/submissions/1808.04380v2/">my report is already online</a>.<br />
<br />
However, not everything that I have to say about the article made it into the report. I will now write about two calculations that I did: the first is a test of one of the article’s main predictions in more general cases; the second is a direct derivation and generalization of a technical result that they obtain in a roundabout way.<br />
<a name='more'></a><br />
<br />
<h4 id="on-the-im-flip-condition">
On the Im-flip condition</h4>
The Im-flip condition is a relation between structure constants and conformal dimensions of the fields, which Gorbenko et al predict as a consequence of their ideas on renormalization group flows. They study these flows in the neighbourhood of the central charge <span class="math inline">\(c=1\)</span>. The flows are driven by the field <span class="math inline">\(V_{(3,1)}\)</span>, which becomes exactly marginal (i.e. has conformal dimension one) at <span class="math inline">\(c=1\)</span>. The condition states that the ratio <span class="math display">\[\rho_{IF}(V) = \lim_{c\to 1}\frac{\Im \Delta(V)}{\left<VVV_{(3,1)}\right>}\]</span> does not depend on the field <span class="math inline">\(V\)</span>. In words, this is the ratio between the imaginary part of the conformal dimension of the primary field <span class="math inline">\(V\)</span>, and a three-point structure constant that involves <span class="math inline">\(V\)</span> and <span class="math inline">\(V_{(3,1)}\)</span>, all taken at <span class="math inline">\(c=1\)</span>.<br />
<br />
In the article, the Im-flip condition is tested for a few fields in the Potts model. Here I would like to explore it for more general fields, which may or may not belong to that model. We do not need to be specific about the model, because the three-point structure constant <span class="math inline">\(\left<VVV_{(3,1)}\right>\)</span> is a universal quantity, and can be computed as a consequence of <span class="math inline">\(V_{(3,1)}\)</span> being a degenerate field with a singular vector at level <span class="math inline">\(3\)</span>. This is the technical result that I will discuss later. For the moment let me focus on the limit of <span class="math inline">\(\Im \Delta(V)\)</span>. To define this limit, we need to specify how the dimension <span class="math inline">\(\Delta(V)\)</span> depends on the central charge. In the Potts model as in minimal models, there are fields of two types:<br />
<ul>
<li>Diagonal fields of the type <span class="math inline">\(V^D_{(r,s)}\)</span>, which have spin zero and conformal dimension <span class="math display">\[\Delta_{(r,s)}= \frac12(1-r^2)b^2 +\frac12 (1-s^2)b^{-2} + 1-rs\]</span> where <span class="math inline">\(r,s\)</span> are fixed, i.e. <span class="math inline">\(c\)</span>-independent. This includes degenerate fields, for which <span class="math inline">\(r,s\in \mathbb{N}^*\)</span>. The parameter <span class="math inline">\(b^2\)</span> is related to the central charge <span class="math inline">\(c\)</span> by <span class="math display">\[c= 1 + 6\left(b+b^{-1}\right)^2\]</span> Let <span class="math inline">\(b^2=-1+\epsilon\)</span>, then <span class="math inline">\(c=1-6\epsilon^2 + O(\epsilon^3)\)</span> and <span class="math display">\[\Delta_{(r,s)} = \frac12(r-s)^2 -\frac12 \epsilon (r^2-s^2) + O(\epsilon^2)\]</span> If <span class="math inline">\(\epsilon\)</span> is not real, <span class="math inline">\(\Im \Delta_{(r,s)}\)</span> is proportional to the first subleading term of <span class="math inline">\(\Delta_{(r,s)}\)</span>.</li>
<li>Non-diagonal fields <span class="math inline">\(V^N_{(r,s)}\)</span>, with left and right dimensions <span class="math inline">\(\frac12\Delta_{(r,\pm s)}\)</span> (and therefore spin <span class="math inline">\(rs\)</span>). The total dimension is <span class="math inline">\(\frac12(\Delta_{(r,s)}+\Delta_{(r,-s)})\)</span>, whose first subleading term is formally the same as for a diagonal field. (The leading term differs, though.)</li>
</ul>
Therefore, as <span class="math inline">\(c\to 1\)</span>, we find <span class="math display">\[\Im \Delta(V^N_{(r,s)}) \simeq \Im \Delta(V^D_{(r,s)}) \propto r^2-s^2\]</span> As we will see, the structure constant <span class="math inline">\(\left<V^N_{(r,s)}V^N_{(r,s)}V_{(3,1)}\right>\)</span> is also proportional to <span class="math inline">\(r^2-s^2\)</span>, provided <span class="math inline">\(r\pm s\notin\mathbb{Z}\)</span>. So we can normalize the Im-flip ratio such that its value for non-diagonal fields is one, <span class="math display">\[\rho_{IF}(V^N_{(r,s)}) = 1\]</span> The situation is different for diagonal fields: <span class="math display">\[\rho_{IF}(V^D_{(r,s)}) \underset{r-s\notin\mathbb{Z}}{=} \frac{r+s}{r-s}\]</span> This is one if and only if <span class="math inline">\(s=0\)</span>. But then <span class="math inline">\(V^D_{(r,0)}\)</span> is indistinguishable from <span class="math inline">\(V^N_{(r,0)}\)</span>. In the special case <span class="math inline">\(r-s\in\mathbb{N}^*\)</span>, we find <span class="math display">\[\rho_{IF}(V^D_{(r,s)}) \underset{r-s\in\mathbb{N}^*}{=} \frac{r+s}{r-s}\cdot\frac{r-1}{r+1}\]</span> which is one if <span class="math inline">\(s=1\)</span>.<br />
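This expansion of the conformal dimension around the central charge one can be checked symbolically; here is a sketch with SymPy (the code is mine, not part of the post):

```python
# Symbolic check (a sketch, not from the post) that, with b^2 = -1 + epsilon,
# Delta_{(r,s)} = (r-s)^2/2 - (epsilon/2)(r^2-s^2) + O(epsilon^2).
import sympy as sp

r, s, eps = sp.symbols('r s epsilon')
b2 = -1 + eps                      # so that c = 1 - 6*eps**2 + O(eps**3)
Delta = (1 - r**2)*b2/2 + (1 - s**2)/(2*b2) + 1 - r*s
expansion = sp.series(Delta, eps, 0, 2).removeO()
claimed = (r - s)**2/2 - eps*(r**2 - s**2)/2
print(sp.expand(expansion - claimed))   # 0
```

If <span class="math inline">\(\epsilon\)</span> has a nonzero imaginary part, the imaginary part of the dimension comes from the subleading term, hence is proportional to <span class="math inline">\(r^2-s^2\)</span>, as used above.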
<br />
To summarize, we find that the fields with the correct Im-flip ratio (which we normalized to one) are the non-diagonal fields <span class="math inline">\(V^N_{(r,s)}\)</span> (provided <span class="math inline">\(r\pm s\notin\mathbb{Z}\)</span>), and the diagonal fields of the type <span class="math inline">\(V^D_{(r,1)}\)</span> with <span class="math inline">\(r> 1\)</span>. The particular fields that were studied by Gorbenko et al are all of this type, so they all have the same Im-flip ratio. Remarkably, the diagonal fields in the Potts model are all of the type <span class="math inline">\(V^D_{(r,1)}\)</span>.<br />
<br />
However, according to <a href="https://arxiv.org/abs/1809.02191">Jacobsen and Saleur</a> Section 5.5, some diagonal fields have <span class="math inline">\(r\leq 1\)</span>, and some non-diagonal fields have <span class="math inline">\(r\pm s\in \mathbb{Z}\)</span>: such fields apparently violate the Im-flip condition. This might be due to the logarithmic nature of the offending fields, as logarithmic fields would probably not be subject to the Im-flip condition. Whether the fields are logarithmic or not is however unknown, as this information cannot be deduced from the partition function.<br />
<br />
<h4 id="structure-constants-of-degenerate-fields">
Structure constants of degenerate fields</h4>
We want to compute structure constants of the type <span class="math inline">\(\left<VVV_{(3,1)}\right>\)</span>. The trick, sometimes called <a href="https://arxiv.org/abs/hep-th/9507109">Teschner’s trick</a>, is to consider four-point correlation functions that involve the degenerate field <span class="math inline">\(V_{(2,1)}\)</span>. Let us assume we want to compute the structure constants <span class="math display">\[c_{(2,1)} = \left< V_{(2,1)} V_{(2,1)} V_{(3,1)} \right> \quad , \quad c_{(3,1)} = \left< V_{(3,1)} V_{(3,1)} V_{(3,1)} \right>\]</span> We normalize the fields so that <span class="math inline">\(\left<V_{(1,1)} V V \right>=1\)</span> for any field <span class="math inline">\(V\)</span>. Let us write the s-channel decompositions of the following four-point functions: <span class="math display">\[\left<V_{(2,1)} V_{(2,1)} V_{(2,1)}V_{(2,1)} \right> = \left|\mathcal{F}_-\right|^2 + c_{(2,1)}^2 \left|\mathcal{F}_+\right|^2\]</span> <span class="math display">\[\left<V_{(2,1)} V_{(2,1)} V_{(3,1)}V_{(3,1)} \right> = \left|\mathcal{G}_-\right|^2 + c_{(2,1)}c_{(3,1)} \left|\mathcal{G}_+\right|^2\]</span> where <span class="math inline">\(\mathcal{F}_\pm,\mathcal{G}_\pm\)</span> are some hypergeometric conformal blocks. Single-valuedness of these four-point functions determines <span class="math inline">\(c_{(2,1)}^2\)</span> and <span class="math inline">\(c_{(2,1)}c_{(3,1)}\)</span>. They are given by eq. 
(2.3.46) of <a href="https://arxiv.org/abs/1406.4290">my review article</a>, where <span class="math inline">\(A,B, C\)</span> are given by (2.3.32) with <span class="math inline">\(\alpha_{(2,1)}=-\frac{b}{2}\)</span> and <span class="math inline">\(\alpha_{(3,1)}=-b\)</span>: <span class="math display">\[c_{(2,1)}^2 = \frac{\gamma(-b^2)\gamma(-1-3b^2)}{\gamma(-2b^2)\gamma(-1-2b^2)} \quad ,\quad c_{(2,1)}c_{(3,1)} = \frac{\gamma(-b^2)^2 \gamma(1+2b^2)\gamma(-1-4b^2)}{\gamma(-2b^2)\gamma(-1-2b^2)}\]</span> This leads to <span class="math display">\[c_{(3,1)} = 2^{-2-4b^2}(1+2b^2)\gamma(\tfrac12+b^2)\gamma(-\tfrac12-2b^2)\sqrt{-\frac{\gamma(-b^2)}{\gamma(-1-3b^2)}}\]</span> where we used the formulas <span class="math inline">\(\gamma(x+1)=-x^2\gamma(x)\)</span> and <span class="math inline">\(\gamma(2x)= 2^{4x-1}\gamma(x)\gamma(x+\tfrac12)\)</span>. In particular, for <span class="math inline">\(c=1\)</span> i.e. <span class="math inline">\(b^2=-1\)</span>, we have <span class="math display">\[c_{(2,1)}\underset{c=1}{=}\frac{\sqrt{3}}{2} \quad , \quad c_{(3,1)} \underset{c=1}{=} \frac{4}{\sqrt{3}}\]</span> Let us generalize the calculation to diagonal fields <span class="math inline">\(V_\alpha\)</span>, and compute the structure constants <span class="math inline">\(c_\alpha = \left<V_\alpha V_\alpha V_{(3,1)}\right>\)</span>. 
We find <span class="math display">\[c_{(2,1)}c_\alpha = \frac{\gamma(-b^2)^2 \gamma(1-2b\alpha)\gamma(-1-2b^2+2b\alpha)}{\gamma(-2b^2)\gamma(-1-2b^2)}\]</span> In the generic case where <span class="math inline">\(\gamma(1-2b\alpha)\)</span> and <span class="math inline">\(\gamma(-1-2b^2+2b\alpha)\)</span> stay finite, the limit is simply <span class="math display">\[c_{(2,1)}c_\alpha \underset{c=1}{=} -\alpha^2 = \frac12 \Delta_\alpha\]</span> In particular this formula works for the spin field, whose dimension is <span class="math inline">\(\Delta_{(\frac12,0)}\underset{c=1}{=}\frac18\)</span>, so that <span class="math inline">\(c_{(\frac12,0)} \underset{c=1}{=} \frac{1}{8\sqrt{3}}\)</span>. Things are a bit more complicated if these two gamma factors go to zero or infinity. This happens in particular if <span class="math inline">\(\alpha = \alpha_{(r,s)} = \frac12((1-r)b+(1-s)b^{-1})\)</span> where <span class="math inline">\(r,s\)</span> are fixed numbers such that <span class="math inline">\(r-s\in\mathbb{Z}\)</span>. By invariance under <span class="math inline">\((r,s)\to (-r,-s)\)</span>, we assume <span class="math inline">\(r>s\)</span>, and find <span class="math display">\[c_{(2,1)} c_{(r,s)} \underset{c=1}{=} \frac12\Delta_{(r,s)} \frac{r+1}{r-1}\]</span> In particular this reproduces our previous formulas for <span class="math inline">\(c_{(2,1)},c_{(3,1)}\)</span>.<br />
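The <span class="math inline">\(c=1\)</span> values of the degenerate structure constants can be checked numerically, by evaluating the formulas at <span class="math inline">\(b^2=-1+\epsilon\)</span> for a small real <span class="math inline">\(\epsilon\)</span>. Here is a sketch in Python (mine, not from the post), assuming the definition <span class="math inline">\(\gamma(x)=\frac{\Gamma(x)}{\Gamma(1-x)}\)</span>, which satisfies the functional equations quoted above:

```python
# Numerical check (a sketch, not from the post) of the c -> 1 values
# c_{(2,1)} = sqrt(3)/2 and c_{(3,1)} = 4/sqrt(3).
from math import gamma as Gamma, sqrt

def g(x):
    # gamma(x) = Gamma(x) / Gamma(1 - x); all arguments below avoid the poles.
    return Gamma(x) / Gamma(1 - x)

eps = 1e-7
b2 = -1 + eps                          # c = 1 - 6*eps**2 + ..., close to 1

c21 = sqrt(g(-b2) * g(-1 - 3*b2) / (g(-2*b2) * g(-1 - 2*b2)))
c21_c31 = g(-b2)**2 * g(1 + 2*b2) * g(-1 - 4*b2) / (g(-2*b2) * g(-1 - 2*b2))
c31 = c21_c31 / c21

print(abs(c21 - sqrt(3)/2) < 1e-4)     # True
print(abs(c31 - 4/sqrt(3)) < 1e-4)     # True
```

Each individual gamma factor vanishes or diverges as <span class="math inline">\(\epsilon\to 0\)</span>; the finite limits arise from cancellations between them, which is why we evaluate at a small but nonzero <span class="math inline">\(\epsilon\)</span>.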
For a non-diagonal field with left and right dimensions <span class="math inline">\(\frac12(\Delta\pm S)\)</span>, we only need to take the <a href="https://arxiv.org/abs/1711.08916">geometric mean</a> of the results for two diagonal fields of dimensions <span class="math inline">\(\frac12(\Delta\pm S)\)</span>. In particular, for a field <span class="math inline">\(V^N_{(r,s)}\)</span> whose left and right dimensions are of the type <span class="math inline">\(\frac12\Delta_{(r,\pm s)}\)</span> with <span class="math inline">\(r\pm s\notin \mathbb{Z}\)</span>, the result is <span class="math display">\[c_{(2,1)}c^N_{(r,s)} \underset{c=1}{=} \frac12 (r^2-s^2)\]</span>Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-56446150176235063492019-01-13T15:36:00.000-08:002019-01-13T15:38:54.907-08:00Plan S implementation: in danger of mission creep?The principles behind plan S have already sparked lots of debate, including an open letter denouncing the plan, based on objections that I found <a href="http://researchpracticesandtools.blogspot.com/2018/11/how-strong-are-objections-to-plan-s.html">not very convincing</a>. Now that the plan’s promoters have published their <a href="https://www.coalition-s.org/feedback/">draft implementation guidance</a> (and are inviting comments on it), the discussion can become more specific. Given the boldness of the principles, their implementation cannot be painless, and is bound to raise criticisms if not resistance. It is therefore both crucial and difficult to get the implementation right, and to inflict the minimum amount of pain that is necessary for achieving the goals.<br />
<br />
This means that the implementation should be tightly focussed on open access and cost reduction, to the exclusion of other possible goals that a reform of the publishing system might have. I will discuss whether the draft implementation guidance is indeed focussed enough. But first, let me be more specific about what a minimal implementation might look like.<br />
<a name='more'></a><br />
<h4 id="sketching-a-minimal-implementation.">
Sketching a minimal implementation</h4>
As I understand it, Plan S is about eliminating artificial restrictions to the distribution and reuse of scientific works, and the artificially high costs that come with such restrictions. To do this, it is necessary and sufficient that<br />
<ul>
<li>articles are distributed under an appropriately permissive license,</li>
<li>paywalls are eliminated.</li>
</ul>
Both items are necessary: eliminating paywalls is not enough for allowing text and data mining, and mandating a permissive license for a number of articles is not enough for reducing costs, so long as some indispensable articles remain accessible through subscriptions only. Together, these two items are sufficient for forcing journals to earn their money by doing actual work on articles, rather than just acting as gatekeepers.<br />
<br />
Therefore, basically, the task is to eliminate perverse practices, not to mandate virtuous ones. Eliminating paywalls is the hard part: funders can mandate researchers to use permissive licenses, but they cannot directly mandate journals to tear down their paywalls. Moreover, there is the difficulty of formulating rules that cannot be evaded by clever publishers. For instance, the implementation guidance includes a clause forbidding mirror journals (paragraph 9.1), closing a potential loophole in the ban on hybrid journals. Still, it is surprising that the implementation guidance mandates so many detailed requirements. Let me examine whether these requirements are really justified, or whether their authors yielded to the temptation of aiming for more than their original goals.<br />
<br />
<h4 id="temptation-1-focussing-on-quality.">
Temptation 1: Focussing on quality</h4>
The quality of articles and of research itself has little to do with open access, and yet when a reform like Plan S is proposed, there is the temptation to discuss its possible effects on quality. The basic objection that reducing publishing costs would sacrifice quality is of course nonsense: the large costs of journals are ultimately due to the dysfunctional economics of a system where authors choose journals without worrying at all about costs.<br />
<br />
Nevertheless, the implementation guidance includes a strong focus on quality, starting with its Point 4: “Supporting Quality Open Access Journals and Platforms”. And in Point 9.1, among the basic mandatory criteria for journals and platforms, we have a “solid system for review according to the standards within the relevant discipline”.<br />
<br />
There is no reason for bundling open access with quality control: a quality-neutral flip to open access would already be great progress. Moreover, translating the focus on quality into concrete guidelines inevitably leads to mandating a specific quality-control mechanism, namely peer review. But who knows which other forms of quality control could emerge? For example, in an open system, text and data mining could let algorithms play an important role in quality control. And a crowdsourced quality control mechanism (cf StackExchange or Wikipedia) could conceivably be applied to scientific articles.<br />
<br />
<h4 id="temptation-2-achieving-funders-dreams.">
Temptation 2: Achieving funders’ dreams</h4>
Since Plan S emanates from a coalition of funders, there is the natural temptation to include requirements that seem convenient or desirable to funders, without having anything to do with open access. The obvious example is (Point 9.2) that “Metadata must include complete and reliable information on funding provided by cOAlition S funders”. It would be far-fetched to justify this clause from the need to track compliance with the open access mandate.<br />
<br />
Moreover, Point 9.2 also mandates that publishers give details on their costs. While we certainly need transparent prices, transparent costs sound like a bad idea. First, costs are hard to define and to compute with any precision. Second, in an open publishing system, publishers compete on price (among other factors), and cost control should come from that competition. Finally, if Elsevier was charging only 500 euros per article rather than 5000, why should we care if they made 40% profit?<br />
<br />
It is understandable that funders want to promote their good works, to streamline their operations, and to see what they get for their money. However, Plan S should not be used to these ends.<br />
<br />
<h4 id="temptation-3-codifying-best-practices.">
Temptation 3: Codifying best practices</h4>
The mandatory quality criteria (Point 9.2) read more like an attempt to codify and standardize today’s best practices than like a minimal implementation of principles. Almost none of the criteria seems indispensable for achieving open access and eliminating paywalls. Actually, assuming that articles are published under a CC-BY license, requirements like having texts in machine-readable formats or having them archived for long-term preservation seem moot, as the license allows third parties to convert, archive and exploit the texts.<br />
<br />
Moreover, an underlying assumption of some of these requirements is the existence of a reference version of the texts, typically the published version. But other publishing models are emerging: from living reviews that can always be modified and may not have final versions, to arXiv preprints that are never sent to any journal, to Wikipedia articles that may not have clearly defined authors. Rigid guidelines risk thwarting the evolution of the scientific article, and today’s best practices can become tomorrow’s bad habits.<br />
<br />
<h4 id="mission-creep-and-its-consequences.">
Mission creep and its consequences</h4>
Going beyond what is strictly necessary to achieve its goals, Plan S opens itself to legitimate criticism that it asks too much from journals and platforms, which inevitably disfavours small or emergent players. The Confederation of Open Access Repositories has formulated <a href="https://www.coar-repositories.org/activities/advocacy-leadership/open-science-and-sustainable-development/">criticism of this kind</a>, and Angela Cochran too has raised <a href="https://scholarlykitchen.sspnet.org/2018/12/07/plan-s-a-mandate-for-gold-oa-with-lots-of-strings-attached/">many valid points</a>.<br />
<br />
Overreaching at the implementation stage could jeopardize the very success of Plan S. And even if the plan succeeded, it would replace the current system with a better but imperfect system that would be too rigid for having good prospects of further improvement. Fortunately, the implementation guidance is open to feedback, and there is some hope that Plan S can still be refocussed on its core mission.Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-14303550486739277182018-11-17T15:09:00.000-08:002018-11-19T12:17:11.866-08:00How strong are the objections to Plan S?After a coalition of European science funding agencies announced their Plan S initiative for open access, a number of researchers wrote an open letter criticizing the move, under the title <a href="https://sites.google.com/view/plansopenletter/open-letter">“Reaction of Researchers to Plan S: Too Far, Too Risky”</a>. To summarize, they fear that Plan S would increase costs, lower quality, and restrict academic freedom. In order to evaluate how seriously these fears should be taken, let me start with a 5-point analysis of the issues, before discussing the open letter’s specific concerns.<br />
<br />
<h4 id="point-1-traditional-journals-are-overpriced-by-an-order-of-magnitude.">
Point 1: Traditional journals are overpriced by an order of magnitude.</h4>
Overall, publishers earn about <a href="https://www.scienceguide.nl/2017/09/what-is-the-price-per-article/">3800-5000</a> euros per article they publish. The true costs of organizing peer review are <a href="https://www.nature.com/news/open-access-the-true-cost-of-science-publishing-1.12676">much lower</a>: a few hundred euros per article at SciPost or PeerJ. The difference is therefore not mainly due to commercial publishers’ profits, but rather to large inefficiencies at all traditional publishers. (Have a look at the <a href="https://www.comparably.com/companies/american-chemical-society/executive-salaries">salaries</a> of the American Chemical Society’s executives.)<br />
<br />
Some publishers also use the money for subsidizing other activities, such as conferences (learned societies) or science journalism (Nature, Science). While these activities are valuable, they are not the main purpose of scientific publishing, and should not determine its future.<br />
<a name='more'></a><br />
<h4 id="point-2-no-open-access-no-affordable-journals.">
Point 2: No open access, no affordable journals.</h4>
The inefficiencies and overpricing of traditional publishers can seem puzzling: with so many journals and publishers around, how can the competition allow this? However, in a subscription-based system, you have no effective competition: when you need to read a given article, you cannot look elsewhere for a substitute. Authors choose where to publish, readers pay: no competition. With Gold open access, authors choose and pay, so prices can play a role in their choices. Actually, any form of true open access eliminates each publisher’s effective monopoly over its articles, and allows prices to decrease.<br />
<br />
(I write true open access in order to exclude delayed open access after an embargo: this only makes the monopolies temporary, rather than eliminating them. Delayed open access should not be identified with Green open access, which <a href="http://researchpracticesandtools.blogspot.com/2018/03/the-open-secrets-of-life-with-arxiv.html">can exist in the absence of embargos</a>.)<br />
<br />
Moreover, an open access publishing model should cost less than a subscription system, if only because building and defending paywalls is costly. Publishers sometimes pretend otherwise, and try to earn extra money for making articles openly accessible, a tactic called double dipping. That this tactic works is proof enough that the existing system is perverse and needs radical reform.<br />
<br />
<h4 id="point-3-when-it-comes-to-quality-the-existing-system-performs-poorly.">
Point 3: When it comes to quality, the existing system performs poorly.</h4>
There is a widespread perception that scientific research is undergoing a “reproducibility crisis”, and that <a href="https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124">most published studies are false</a>. An important part of the problem is that articles are often <a href="https://sfdora.org/">judged by the journals they appear in</a>. And some of the most prestigious journals, such as Science and Nature, follow criteria that are <a href="https://researchpracticesandtools.blogspot.com/2018/01/germany-wont-pay-for-natures-scientific.html">more journalistic than scientific</a> when selecting articles.<br />
While this is not directly related to open access (or the lack thereof), this should be kept in mind when contemplating reforms such as Plan S. We are not talking of improving a well-functioning system, but of reforming a dysfunctional system that may not be self-correcting.<br />
<br />
<h4 id="point-4-researchers-are-prisoners-of-the-system-and-most-of-them-dont-care.">
Point 4: Researchers are prisoners of the system, and most of them don’t care.</h4>
Researchers are used to working for journals, where they publish without remuneration, and work as peer reviewers and editors, most often for free. A priori there is nothing wrong with that, as researchers do not count on journals for earning their money. The system becomes objectionable when journals abuse their position, in particular when they require researchers to give away copyright to their works, and proceed to strongly enforce said copyright in their own favour.<br />
<br />
A particularly draconian case is brought to us by Elsevier, the notoriously rapacious publisher: Elsevier has so far refused to join the <a href="https://i4oc.org/">Initiative for Open Citations</a>, a form of data opening that is manifestly not a threat to existing business models. Of course, Elsevier has its own reasons, more related to its ambition of controlling research data and workflow, than to its legacy publishing business.<br />
<br />
Elsevier’s behaviour has made it the target of the most widespread <a href="http://thecostofknowledge.com/">boycott</a> of an academic publisher to date. This boycott has been joined by only 17000 researchers, and has had little noticeable effect. While giving away one’s copyright is often unavoidable, working for Elsevier as a reviewer can easily be renounced without harming one’s career. Boycotting Elsevier as a reviewer is the least a researcher can do, if he is concerned about how academic publishing works. The relative failure of the boycott demonstrates that the vast majority of researchers are not concerned.<br />
<br />
<h4 id="point-5-progress-is-slow-and-may-well-stop.">
Point 5: Progress is slow, and may well stop.</h4>
After decades of efforts, the open access movement has achieved relatively little in terms of making articles openly accessible, and nothing in terms of reducing costs. One may be tempted to conclude that the endeavour is futile and the cost reductions illusory. However, the success of the pirate open access website Sci-Hub <a href="http://bjoern.brembs.net/2016/02/sci-hub-as-necessary-effective-civil-disobedience/">shows otherwise</a>, by giving a glimpse of a possible universal open access future, and exerting a strong downward pressure on subscription prices. Thanks to Sci-Hub, it is indeed possible to conveniently access the literature without paying a subscription, and this has tremendously helped consortiums such as the German DEAL or the French Couperin in their negotiations with publishers.<br />
<br />
Inevitably, Sci-Hub is under legal attack from publishers, and so is the social networking site <a href="https://www.nature.com/articles/d41586-018-06945-6">ResearchGate</a>, where researchers share their articles. The American crackdown on Sci-Hub involves drastic measures that <a href="https://www.eff.org/deeplinks/2017/11/another-court-overreaches-site-blocking-order-targeting-sci-hub">target not only Sci-Hub itself, but also a large number of Internet intermediaries</a>: academic publishers such as the American Chemical Society and Elsevier are at the forefront of establishing a legal censorship regime for the Internet.<br />
<br />
In Europe, the establishment of censorship rather comes from the news and entertainment industry, and takes the shape of <a href="https://www.theverge.com/2018/9/12/17849868/eu-internet-copyright-reform-article-11-13-approved">Articles 11 and 13 of the draft copyright directive</a>. The principles behind these articles (taxing links and filtering uploads) would destroy arXiv, GitHub and Wikipedia. Rather than narrowing these principles to news and entertainment, the directive’s authors have carved out exceptions that will save today’s existing platforms – but not possible open platforms that do not exist yet.<br />
<br />
Therefore, the present conjuncture is as good as it will get for a transition to open access, with Sci-Hub widely accessible, and the Internet relatively open. The conjuncture is likely to get worse, possibly quickly. This may be a now-or-never moment.<br />
<br />
<h4 id="plan-s.">
Plan S.</h4>
In this context comes <a href="https://en.wikipedia.org/wiki/Plan_S">Plan S</a>: a coordinated attempt by research funding bodies to induce a flip to open access in the publishing system. Previously, individual funding bodies were content to mandate that the research they fund be openly accessible: it is now apparent that such mandates can achieve their limited goals, but neither change the system nor decrease publication costs. This is the rationale for the more radical features of Plan S, in particular banning hybrid journals.<br />
<br />
I will now discuss the various objections to Plan S from the <a href="https://sites.google.com/view/plansopenletter/open-letter">open letter</a>.<br />
<br />
<h4 id="objection-0-plan-s-goes-too-fast.">
Objection 0: Plan S goes too fast.</h4>
Plan S aims to be implemented by 2020, which can seem frightfully soon for such a radical flip. However, there is nothing technically complicated in creating a new open access journal, flipping an existing journal to an open access model, or creating a new preprint archive. All this has been done many times before: in the hybrid system we now have, most players already have the required technical infrastructure. What prevents publishers from flipping is their understandable desire to milk the subscription cow for as long as possible.<br />
<br />
A gradual transition could a priori appear smoother and less disruptive: but the transitional situation has been going on for decades, and is in danger of becoming permanent. And in the transitional situation, double dipping can make costs not only higher than in a pure open access situation, but also higher than in the traditional subscription model. Moreover, the open letter rightly complains that researchers who are subject to Plan S could be disfavoured in the prevailing perverse system of career incentives: and indeed there is a kind of chicken-or-egg problem with flipping both the publishing system and the assessment system. But making the flip more gradual only worsens this problem.<br />
<br />
So there are good reasons for going fast, even before taking into account Point 5: that the present conjuncture is good, but may well degrade quickly.<br />
<br />
<h4 id="objection-1-the-complete-ban-on-hybrid-society-journals-of-high-quality-is-a-big-problem.">
Objection 1: The complete ban on hybrid (society) journals of high quality is a big problem.</h4>
For Plan S, the ban on hybrid journals is a feature, not a bug: it amounts to burning the bridges to the subscription system, and signals a commitment to a complete flip to open access. Hybrid journals allow double dipping, which is why a partial flip to open access does not reduce costs, although a complete flip would.<br />
<br />
The problem is that this outlaws many existing journals. For an individual researcher in the existing publication and assessment system, to publish open access while renouncing hybrid journals can mean career suicide. However, raising this objection means ignoring that Plan S is a drive to change the whole system, by a globally significant coalition of funders. This objection assumes that existing journals and assessment methods do not change: in other words, that Plan S fails. This objection therefore depends on Objection 2, which predicts such a failure.<br />
<br />
But again, it would be quite easy for existing high-quality journals to become compliant with Plan S. Failing that, creating new high-quality open access journals is nowadays difficult but not impossible, as the examples of SciPost (physics) or PeerJ (biology) show. This will become much easier in the context of Plan S, with many good researchers having to renounce their usual journals and seek other venues. It is not reasonable to complain about the lack of high-quality open access journals before strong incentives for creating them have appeared.<br />
<br />
The most serious objection to the ban on hybrid journals is actually tactical: that the ban could be circumvented by creating <a href="https://forbetterscience.com/2018/10/22/robert-jan-smits-scholarly-societies-will-have-to-bite-the-bullet-and-go-open-access/">mirror journals</a>.<br />
<br />
<h4 id="objection-2-we-expect-that-a-large-part-of-the-world-will-not-fully-tie-in-with-plan-s.">
Objection 2: We expect that a large part of the world will not (fully) tie in with Plan S.</h4>
The scenario is now that Plan S would fragment the research community: part of Europe would adopt a Plan S-compliant publishing system, while the rest of the world would keep the existing system, ignore the Plan S-compliant journals, and thereby ruin the careers of researchers who publish there.<br />
<br />
This piece of dystopian fiction first assumes that Plan S fails to induce a general flip of the publishing landscape towards open access. The second assumption is that there is no progress on research assessment, that the San Francisco <a href="https://sfdora.org/">Declaration on Research Assessment</a> is widely ignored (including by its signatories), and that researchers continue to be evaluated according to the journals they publish in. And the third assumption is that Plan S-compliant journals will not acquire the necessary prestige.<br />
<br />
While the likelihood of the first two assumptions is hard to evaluate, the third is certainly not plausible. With the involvement of Europe’s most prestigious research funder (the ERC), Plan S-compliant publishing is set to play a major role in Europe. And there is no reason why the rest of the world would ignore most European research, just because it does not appear in the usual journals. A fragmented publishing system does not imply a fragmented research community.<br />
<br />
<h4 id="objection-3-the-total-costs-of-scholarly-dissemination-will-likely-rise-instead-of-reduce-under-plan-s.">
Objection 3: The total costs of scholarly dissemination will likely rise instead of reduce under Plan S.</h4>
This assertion runs contrary to the elementary economics of the game (see Points 1 and 2), and the open letter does not provide a basis for it.<br />
<br />
<h4 id="objection-4-plan-s-ignores-the-existence-of-large-differences-between-different-research-fields.">
Objection 4: Plan S ignores the existence of large differences between different research fields.</h4>
Since Plan S means a rapid transition to open access, it would of course have more effect on fields that are less advanced in this respect. But it is not clear why some fields need less open access, or a longer transition. Plan S is a rather minimal set of principles for ensuring true open access and eliminating some of the abuses of the existing publishing system: this leaves much room for different open access models. By adopting best practices from other fields without waiting, fields such as chemistry would be spared agonizing hesitations and transitions.<br />
<br />
<i>Note added Nov. 19th</i>: In a comment to this post, Leonid Schneider points out that the EMBO journal "employs full time data experts to analyse images for manipulations". This is an example of a field-specific, justified expense that goes beyond the costs of the lean journals that I cited in Point 1. While this may not justify the journal's <a href="http://emboj.embopress.org/about">$5200 APC</a>, it would be good to take such costs into account when implementing Plan S.<br />
<br />
<br />
<h4 id="objection-5-plan-s-is-a-serious-violation-of-academic-freedom.">
Objection 5: Plan S is a serious violation of academic freedom.</h4>
Freedoms come with purposes, responsibilities, and limits. You are not free to sell yourself into slavery, to work for less than the minimum wage, or to take dangerous drugs, for the good reason of protecting yourself and others. Similarly, giving away your copyright to publishers, putting your work behind paywalls, and playing the game of a perverse assessment system, should not be allowed in the name of academic freedom. For a more detailed discussion of academic freedom in the context of Plan S, see Marc Couture’s <a href="https://poynder.blogspot.com/2018/11/plan-s-and-researchers-rights-reframing.html">guest post</a> on Richard Poynder’s blog.<br />
<br />
<h4 id="dont-complain-act">
Don’t complain, act!</h4>
Why does the open letter worry so much about the “ranking and standing” of Plan S-subjected researchers, when the debate is about journals? The letter’s authors seem to accept the entrenched practice of judging researchers by the journals they publish in, although this is widely denounced as perverse. But it is not possible to significantly reform the publishing system without upsetting this practice, at least temporarily. If careers continue to be determined by numbers of articles in Nature or Science, then it is game over for open access and affordable publishing.<br />
<br />
When it comes to open science, chemistry seems to lag behind fields such as physics and biology. The field’s leading publisher, the American Chemical Society, did not even join the <a href="https://en.wikipedia.org/wiki/Initiative_for_Open_Citations">Initiative for open citations</a>. But chemists could welcome the opportunity to catch up. If you had no telephone and were offered a mobile phone, would you insist on installing a landline first?<br />
<br />
The open letter has hundreds of signatories. Surely one could find among them enough well-respected researchers for building the editorial board of a new open access, affordable, high quality, generalist chemistry journal. They would not even need to do it from scratch: they could just start a new division of PeerJ or SciPost. Assuming of course that they really support open access, as the open letter claims in its first sentence.Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-66284522015805286262018-10-27T09:50:00.000-07:002018-11-24T12:09:57.229-08:00CNRS rejects Couperin's claimed victory in Springer big dealAfter <a href="http://researchpracticesandtools.blogspot.com/2018/03/springer-goes-rogue.html">long and tortuous negotiations</a>, the French consortium Couperin has <a href="https://www.couperin.org/site-content/261-a-la-une/1358-springer-nature-accepte-une-baisse-de-tarif-pour-l-achat-de-ses-revues-prealable-a-tout-accord-avec-le-consortium">claimed victory</a> in its recent agreement with Springer, after having secured price decreases. This claim seems reasonable, as prices of big deals with publishers tend to increase steadily. Of course, critics can still point out that Springer remains very expensive compared to smaller, more efficient publishers. But at least Springer seems amenable to some compromises in negotiations. And one should not forget that the greediest and most obnoxious publisher remains Elsevier, who even refused to join the <a href="https://en.wikipedia.org/wiki/Initiative_for_Open_Citations">Initiative for open citations</a>.<br />
<br />
I was therefore surprised when CNRS announced its rejection of the Springer deal, although CNRS takes part in Couperin and was actively involved in the negotiations. The email announcement came from Alain Schuhl, a CNRS official who is also a member of Couperin’s governing council. (See the email and its English translation below.) This email was a warning to CNRS researchers that access to Springer journals was now cut off. However, articles from 2017 and earlier are still available, as they are covered by the previous subscription.<br />
<br />
The explanation for the rejection is that the deal’s price was too high according to the Ministry’s <a href="http://m.enseignementsup-recherche.gouv.fr/cid132529/le-plan-national-pour-la-science-ouverte-les-resultats-de-la-recherche-scientifique-ouverts-a-tous-sans-entrave-sans-delai-sans-paiement.html">open science plan</a>. This explanation makes little sense for two reasons:<br />
<a name='more'></a><br />
<ul>
<li>The plan was announced in July 2018, whereas the Springer negotiations were supposed to be concluded by the end of 2017. The negotiating mandate and strategy could therefore not take the plan into account.</li>
<li>The plan says much about open science, but nothing about costs. The email is all about costs, and does not mention open science.</li>
</ul>
The email also alludes to the ongoing negotiations of Couperin with Elsevier for a 2019-2021 deal, suggesting that the main reason for the rejection might be to show determination to Elsevier. But surviving without a Springer subscription says little about CNRS’s ability to survive without an Elsevier subscription, for several reasons:<br />
<ul>
<li>CNRS researchers often work in institutions such as universities, where they can access publications without resorting to CNRS’s subscription. Elsevier tries much harder than Springer to prevent this kind of double-dipping from subscribers.</li>
<li>In some subfields of medicine, chemistry and engineering, Elsevier journals are hard to ignore.</li>
</ul>
Nevertheless, when it comes to Elsevier, <a href="https://www.timeshighereducation.com/blog/open-access-germany-best-deal-no-deal#survey-answer">the best deal is no deal</a>, as the German and Swedish examples demonstrate. Recent experience suggests that Elsevier will not consent to a deal comparable to Springer’s, let alone accede to the more ambitious demands of the Couperin consortium. (These demands include much about open access, not just price.) The only way to have some hope of a tolerable deal is to prepare for a no-deal scenario. This would imply warning researchers about the likely suspension of access to Elsevier journals. But CNRS is not doing this at the moment. Surely it would be pointless to make a show of symbolic and painless determination towards the relatively reasonable Springer, while preparing to accept a less palatable deal with the tougher Elsevier.<br />
<br />
The rejection of the deal by CNRS also raises the issue of Couperin’s usefulness. The reconfiguration, or even disappearance, of the consortium might actually not be a bad thing, as the consortium’s large size makes it less able to withstand a no-deal scenario, and to agree on a strategy. Agreeing on a strategy can be difficult: for example, having authors pay (rather than readers) would move costs from some institutions to some others. But having a strategy that involves open access is probably necessary, even if the aim is only to lower costs.<br />
<br />
<h3 class="unnumbered" id="email-announcement-english-translation">
Email announcement: English translation</h3>
Dear colleagues,<br />
<br />
As you may have noticed, access to 2018 issues of Springer journals has been partly suspended on the BibCNRS portal by Springer. This suspension affects all institutions that have not renewed, or not yet confirmed renewing, their subscriptions in 2018, following Springer’s offer [to the Couperin consortium] for the period 2018-2020. These institutions include CNRS.<br />
<br />
Two weeks ago, Couperin’s negotiations with Springer led to an offer by Springer that includes, for the first time, a steady decrease of subscription costs over 3 years, but covers a slightly reduced list of journals. CNRS acknowledges the negotiators’ work.<br />
<br />
However, CNRS has decided to reject Springer’s offer, and has asked for a new offer. Springer’s offer indeed comes in the context of a transformation of scientific publishing’s economic model. (Scientific publishers’ yearly revenues for France are almost 100 million euros.)<br />
<br />
For several years, scientific communities have been involved in an endeavour to make scientific publications accessible at a “fair price”. Moreover, the French minister of Research has announced a plan for “open science” on July 4th, 2018. This plan aims to reduce the budgets for contracts with publishers. We believe that Springer’s offer is still too expensive to be compatible with this plan’s goals.<br />
<br />
We are fully conscious of the difficulties that are caused by this suspension. But we are convinced that a firm stance towards publishers is needed for bringing costs back to reasonable levels. This is all the more important now that difficult negotiations with Elsevier have just started.<br />
<br />
<h3 class="unnumbered" id="email-announcement-french-original">
Email announcement: French original</h3>
<br />
Chères et chers collègues,<br />
<br />
Comme vous avez pu le constater, les accès aux titres Springer 2018 ont été partiellement coupés sur le portail BibCNRS par l’éditeur. Cette interruption concerne l’ensemble des établissements qui ne renouvellent pas leur abonnement en 2018 ou qui n’ont pas encore confirmé le renouvellement, suite à la diffusion de l’offre Springer couvrant la période 2018 à 2020. C’est le cas du CNRS.<br />
<br />
Les négociations de Couperin avec Springer ont abouti il y a quinze jours à une offre de l’éditeur qui comprend, pour la première fois, une baisse continue du coût des abonnements sur 3 ans, concernant toutefois un périmètre légèrement réduit de titres de revues. Le CNRS salue le travail des négociateurs.<br />
<br />
Cependant, le CNRS a décidé de refuser l’offre de Springer et demande à l’éditeur de lui faire une nouvelle proposition. En effet, cette offre de Springer arrive aujourd’hui dans un contexte de transformation du modèle économique de l’édition scientifique, marché qui pèse, en France, près de 100 millions d’euros.<br />
<br />
Depuis plusieurs années, les communautés scientifiques sont engagées dans une démarche visant à rendre accessible les publications scientifiques « au juste prix ». De son côté, la ministre de l’Enseignement Supérieur, de Recherche et de l’Innovation a présenté le 4 juillet dernier un plan national pour la « science ouverte ». Ce plan vise, entre autres, à réduire sensiblement les budgets dédiés aux contrats avec les éditeurs. Le coût des abonnements proposé par Springer reste selon nous trop élevé au regard des objectifs fixés par ce plan.<br />
<br />
Nous avons pleinement conscience des difficultés induites par cette interruption. Mais nous sommes convaincus que seule une position ferme vis a vis des éditeurs peut nous permettre de ramener le coût des publications scientifiques à un niveau raisonnable. C’est d’autant plus important au moment où viennent de s‘engager des négociations difficiles avec Elsevier.Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com1tag:blogger.com,1999:blog-9119793002820072645.post-15828226036664879392018-09-27T14:29:00.000-07:002018-10-23T12:39:27.610-07:00SciPost two years onThere are two reasons why we should care about the journal SciPost Physics: it practices open peer review, by publishing the exchanges between authors and reviewers, and it is free to authors and readers, at a time when academia is struggling to escape the stranglehold of predatory publishers such as Elsevier.<br />
<br />
Two years after it was launched, SciPost Physics is alive and well, with <a href="https://scipost.org/journals/publications?journal=SciPostPhys">155 published articles</a> at the time of this writing. The journal has managed to attract articles of high quality, some of them by well-known physicists such as Cardy, Verlinde, Seiberg, or Rychkov. The main challenges are now to attract many more authors, and to secure sustainable funding. The journal is funded by institutional subsidies, an economic model that works for arXiv but that is rare for journals. If the finances worked out, the journal could ultimately hope to become a megajournal, and publish thousands of articles a year like PLOS One or PeerJ. For the moment, the stated aim of the “<a href="https://scipost.org/ExpSustDrive2018">Expansion and Sustainability Drive</a>” is to publish 500 articles a year at a cost of 200,000 euros. So each published article should cost 400 euros, a very reasonable price for a lean electronic journal: an order of magnitude more than arXiv, an order of magnitude less than Elsevier. (Unlike Elsevier, SciPost does not pay executives, lobbyists, shareholders, salespersons, lawyers, etc.)<br />
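The per-article figure is a one-line sanity check on the drive's stated targets (the two inputs are the numbers quoted above):

```python
# Targets quoted above from SciPost's Expansion and Sustainability Drive.
target_articles_per_year = 500
target_budget_euros = 200_000

cost_per_article = target_budget_euros / target_articles_per_year
print(cost_per_article)  # 400.0 euros per published article
```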
<br />
In this post I will comment on SciPost’s workflow and platform, updating my <a href="http://researchpracticesandtools.blogspot.com/2016/10/publishing-in-scipost-must.html">earlier assessment</a> in light of two years’ worth of experience. The aims are to help authors decide whether this journal suits them, to determine which features of SciPost should be emulated (or avoided) by other journals, and to provide feedback to SciPost itself. <br />
<a name='more'></a><br />
<h3 class="unnumbered" id="is-scipost-only-a-journal">
Is SciPost only a journal?</h3>
SciPost’s stated ambition is to be not just a journal, but a portal where articles are publicly discussed. It is possible to comment on any arXiv preprint or journal article, whether it was submitted to SciPost Physics or not. However, very few people take advantage of this: so far in 2018 there have been <a href="https://scipost.org/commentaries/">only 2 such Commentaries</a>. Two reasons might plausibly explain why:<br />
<ul>
<li>There is no obvious way for readers of an article to discover that there is a SciPost Commentary on that article. While a post on this blog that mentions an arXiv preprint is linked back from arXiv, a SciPost commentary on that preprint is not.</li>
<li>Commentaries are systematically moderated by SciPost, so it takes time and effort to have them appear online.</li>
</ul>
In order for commentaries to have a chance of becoming popular, they would need to be linked from arXiv. And moderation should be done a posteriori, not a priori, or even replaced with a StackExchange-like quality control mechanism. Also, the author of a commentary should be allowed to easily amend it.<br />
<h3 class="unnumbered" id="reviewers-anonymous-or-not">
Reviewers: anonymous or not?</h3>
<br />
SciPost reviewers have the choice of remaining anonymous, or publicly disclosing their names alongside their reports. However, quickly browsing the latest 20 published articles, I find that only about 10% of reports are signed. This is not surprising, as writing anonymous reports is business as usual for most researchers. However, PeerJ manages to do a much better job at encouraging reviewers to disclose their identities, and <a href="https://peerj.com/benefits/review-history-and-peer-review/">40% of PeerJ reports are signed</a>. SciPost could take the very simple step of having reports signed <a href="https://en.wikipedia.org/wiki/Default_effect">by default</a>, rather than anonymous by default. This could come with a message explaining why signing reports is better in most cases.<br />
<h3 class="unnumbered" id="the-reviewers-tale">
The reviewer’s tale</h3>
<br />
I have <a href="https://scipost.org/submissions/1808.04380v2/">recently experienced SciPost as a reviewer</a>. A few comments on the substance of the process:<br />
<ul>
<li>SciPost requires that I give marks to the article in no fewer than 6 areas: Validity, Significance, Originality, Clarity, Formatting, Grammar. And in each area, there is a wide scale of possible marks, with distinctions that are not clear. (What is the difference between ’High’ and ’Top’?) But I may not have an opinion on all of the article’s aspects, in particular when it comes to the highly subjective areas of Significance and Originality. Being unable to give meaningful marks, I have given the highest mark in each area. At the very least, giving these marks should be optional, not mandatory. Of course, it would be preferable to have a <a href="http://researchpracticesandtools.blogspot.com/2014/03/rating-scientific-articles-why-and-how.html">more meaningful rating system</a>. And SciPost’s marks are not put to use: I cannot search for articles with top Originality and high Clarity, for example.</li>
<li>As a reviewer, I have to assess my own qualifications: expert, very knowledgeable, knowledgeable, generally qualified, or not qualified. This assessment is not publicly displayed. I do not see the point: the editor should know how qualified I am.</li>
<li>As a reviewer, I have to say whether the article should be published, and in which Tier: Tier I (top 10% in this journal), Tier II (top 50%), or Tier III. But I should have the option of giving my opinion on technical matters only, and leaving editorial judgements to editors.</li>
<li>Reports by invited reviewers have to be vetted before they appear online, and then they cannot be amended by their authors. More flexibility would be welcome.</li>
</ul>
Technical comments on the platform:<br />
<ul>
<li>It is good that I can save my unfinished report, and not have to complete it in one go. This would be even better if I could save the report without having filled in all mandatory fields.</li>
<li>It is good that there is a preview of the report, but the preview curiously does not have line breaks. Also, it is not clear which fields will be shown to the editor only, and which fields will be publicly displayed. It would be better for the preview to accurately reflect what will be publicly displayed. On the other hand, it is not really necessary for the preview to appear side-by-side with the report.</li>
<li>When entering reports, why not allow LaTeX for enumerations and other formatting, in addition to formulas? Or allow me to enter a report in some standard format, after conversion from LaTeX using Pandoc.</li>
</ul>
<h3 class="unnumbered" id="miscellaneous-suggestions">
Miscellaneous suggestions</h3>
<ul>
<li>Better display the history of each submitted article: not just its current status, but the past and future steps in the process, with the dates of the past steps.</li>
<li>Have a Contact email address or web interface for technical queries, and quickly answer such queries. At the moment, the web interface allows the authors and reviewers to write to the Editor in charge only.</li>
<li>Make it easy to print the reviewer reports that appear on the website, possibly by providing PDF versions.</li>
</ul>
<br />
<b>One more suggestion (update on October 23)</b><br />
<br />
When asking potential reviewers to accept or decline an invitation to review, SciPost could emulate JHEP in the following ways:<br />
<ul>
<li>Declining the invitation could be done via a pre-written email to the editor, where the decliner could suggest alternative reviewers, and explain his reasons for declining.</li>
<li>When accepting the invitation, the reviewer could be asked to declare a self-imposed deadline. (No more than 30 days at JHEP.) The reviewer could then receive an automatic reminder email when approaching the deadline. If the report is not submitted by the deadline, the reviewer could be asked to declare a new deadline by another automatic email.</li>
</ul>
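Very little machinery would be needed to implement such automatic reminders. A minimal sketch (the 30-day cap follows JHEP's practice as described above; the five-day reminder lead time is my own assumption):

```python
from datetime import date, timedelta

MAX_DELAY_DAYS = 30       # JHEP-style cap on the self-imposed deadline
REMINDER_LEAD_DAYS = 5    # assumed lead time for the reminder email

def accept_invitation(today, requested_delay_days):
    """Return the reviewer's self-imposed deadline, capped at MAX_DELAY_DAYS."""
    delay = min(requested_delay_days, MAX_DELAY_DAYS)
    return today + timedelta(days=delay)

def emails_due(today, deadline, report_submitted):
    """Decide which automatic email, if any, the platform should send today."""
    if report_submitted:
        return None
    if today >= deadline:
        return "ask for a new deadline"
    if today >= deadline - timedelta(days=REMINDER_LEAD_DAYS):
        return "reminder"
    return None

# A reviewer asks for 45 days on October 1st: the deadline is capped at 30 days.
deadline = accept_invitation(date(2018, 10, 1), requested_delay_days=45)
print(deadline)  # 2018-10-31
print(emails_due(date(2018, 10, 28), deadline, report_submitted=False))  # reminder
```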
Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com1tag:blogger.com,1999:blog-9119793002820072645.post-45255509398874280532018-07-04T11:48:00.000-07:002018-07-04T11:49:11.450-07:00Copyright directive: some criticismsThe European Parliament will soon vote on a draft copyright directive, which has attracted much criticism. About ten days ago, I discussed it with Cédric Villani, the (admittedly not European) member of parliament for my constituency. According to him, the French government is pushing for the directive to be adopted. I then wrote him the following email, translated here from the French, which summarizes some of the criticisms:<br />
<a name='more'></a><br />
<br />
<blockquote class="tr_bq">
Dear Cédric Villani,
<br />
<br />
I am the person who discussed with you last Saturday evening the draft European copyright directive, and the problems with its Articles 3, 11 and 13. I am writing so that you can contact me if you have more complete answers on the subject. Let me take this opportunity to add a few points:
<br />
<br />
1. You asked me which startups could be hampered by the directive. One example: the free software ContentMine (made in Cambridge, England) makes it possible in principle to make scientific and technical discoveries by “mining” the literature. The current legislative framework hinders its use, and Article 3 will make the problem worse, according to its creator Peter Murray-Rust. The potential discoveries will then be made either in the United States, or even in countries that allow themselves to ignore copyright protection altogether.
<br />
<br />
2. The purpose of Article 13 is to protect the revenues of rights holders. But to do so, Article 13 sets up automated censorship systems, which threaten freedom of expression. Making money from copyright is not, however, a fundamental freedom. Intellectual property exists to stimulate artistic creation and technical and scientific progress, but overprotecting it produces the opposite effect. (That copyright persists for 70 years after the author’s death does not stimulate the author, but hinders teachers, Wikipedia, later generations of authors, etc.)
<br />
<br />
3. An example of a problem created by Article 13: a researcher publishes an article in a journal, then reuses text and/or figures and/or formulas in a new text (lecture notes, a review article, a new article...). This reuse may be illegal: it depends on the case, and in practice it is impossible to know without disproportionate effort. If the researcher now tries to put this new text online on a platform (arXiv, a personal website, a blog, a conference website, another journal...), the platform’s automated censorship system will prevent it, probably including in cases where it is legal.
<br />
<br />
Some problems can be mitigated by exceptions. But a regime of exceptions forecloses future activities that nobody thought of when the exceptions were formulated. Moreover, the complexity of the rules deters players that are too small to hire specialized lawyers and accept the possibility of a lawsuit. Finally, entrusting such complicated rules to an automated censorship system guarantees that legal activities will be censored.
<br />
<br />
Best regards,</blockquote>
Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-92073256493862736732018-05-29T14:37:00.000-07:002018-05-29T14:37:31.498-07:00Where should academics write if they want to be read?Out of the 2 million academic articles that are published each year, many are not read by anyone but their authors, and most have no more than a handful of readers. For someone who writes in order to spread ideas, and does not publish just to avoid perishing, this can be quite discouraging. Of course, not everything one has to say is of interest to many people. Still, part of the problem could come from the academic article as a venue, and one may wonder whether other writing venues (such as a blog) could reach more readers.<br />
<br />
In this post I will list various venues for scientific writing, and try to do an order-of-magnitude comparison between them. I will not simply estimate how many readers are reached: a tweet may reach many people, but it is read in seconds and can be quickly forgotten. Rather, I will try to estimate the ratio between the time spent by all readers on a text, and the time spent writing it.<br />
<a name='more'></a><br />
<br />
Like any quantitative metric of this type, this “time ratio” comes with potential problems with its estimation and interpretation. I will deal with some of these problems in the next paragraph, and argue that the time ratio is meaningful for comparing venues at the order-of-magnitude level. Readers who already accept that can skip this paragraph.<br />
<h4 id="the-fine-print">
The fine print</h4>
Here are the admittedly important caveats with the time ratio.<br />
<ol>
<li><i>The ratio only deals with the writing itself, whereas much work is needed before having something interesting to write.</i> The purpose of the exercise is to compare the efficiency of various kinds of writing.</li>
<li><i>What about oral communication, videos, etc?</i> That would be an interesting generalization.</li>
<li><i>You favour unclear texts that make readers sweat longer.</i> At the level of venues rather than individual texts, the effect of clarity should average out.</li>
<li><i>Reading time is a poor proxy for the influence of ideas.</i> I want to measure the efficiency of writing venues, not the influence of ideas.</li>
<li><i>What about readers who read only part of the text?</i> This is taken into account when estimating reading time.</li>
<li><i>The time ratio cannot be defined or measured precisely.</i> Good! Then there is less potential for abuse than with the h-index or impact factor.</li>
</ol>
<h4 id="the-formulas">
The formulas</h4>
We can estimate the time ratio at the level of individual texts, or at the level of individual people. The time ratio for a text is <br />
<span class="math display">$$R_\text{text} = N_\text{readers} \times \frac{T_\text{reading}}{T_\text{writing}}$$</span> The time ratio for a person is the time spent as a reader in a venue, divided by the time spent as a writer: <br />
<span class="math display">$$R_\text{person} = \frac{T_\text{reader}}{T_\text{writer}}$$</span> For a given venue, these two definitions of the time ratio should agree on average. I will use the text-level definition as the primary computing tool, and the person-level definition as a sanity check. I will present the numbers for a given venue as follows:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-TpZmqvGCTyw/Ww2j_p1xhfI/AAAAAAAACJI/mbvZ-3AnRkc_70pM0MuU87AavmtH6SYfgCLcBGAs/s1600/blog-figure0-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="94" data-original-width="661" height="45" src="https://3.bp.blogspot.com/-TpZmqvGCTyw/Ww2j_p1xhfI/AAAAAAAACJI/mbvZ-3AnRkc_70pM0MuU87AavmtH6SYfgCLcBGAs/s320/blog-figure0-1.png" width="320" /></a></div>
<br />The length of the white rectangle is proportional to the logarithm of <span class="math inline"><i>R</i><sub>text</sub></span>. If the ratio is less than one, this logarithm is negative, and the blue rectangle for <span class="math inline"><i>T</i><sub>writing</sub></span> actually spills to the left of the red rectangle. All these numbers are estimated up to factors of <span class="math inline">3</span>. Times are given in hours. So here are the results:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-UowgpAW3uwU/Ww2kFzsZPlI/AAAAAAAACJM/536-_fFBP3sfT_0Sj9kcn6eo4vO5Gdz5QCLcBGAs/s1600/blog-figure1-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="673" data-original-width="720" height="299" src="https://1.bp.blogspot.com/-UowgpAW3uwU/Ww2kFzsZPlI/AAAAAAAACJM/536-_fFBP3sfT_0Sj9kcn6eo4vO5Gdz5QCLcBGAs/s320/blog-figure1-1.png" width="320" /></a></div>
<br />
<h4 id="where-do-these-numbers-come-from">
Where do these numbers come from?</h4>
<ul>
<li><b>Article:</b> A typical scientific article. I estimate that an article is written in <span class="math inline"><i>T</i><sub>writing</sub> = 100</span> hours and read in <span class="math inline"><i>T</i><sub>reading</sub> = 1</span> hour. These numbers can vary a lot between articles; what matters is their ratio. By reader I mean someone who goes further than the abstract. The number of readers per paper <span class="math inline"><i>N</i><sub>readers</sub> = 30</span> can be estimated in two ways: as how many papers a scientist reads for each paper he writes (not forgetting that having coauthors means writing fractions of papers), or as the number of citations per paper. (Papers are often cited without being read, or read without being cited; I assume that these effects cancel out.) The resulting time ratio <span class="math inline"><i>R</i> = 0.3</span> suggests that the average scientist spends more (but not much more) time writing than reading articles, which sounds right. </li>
<li><b>Report:</b> A confidential reviewer report for a journal. The time <span class="math inline"><i>T</i><sub>writing</sub> = 3</span> hours is for the writing per se, not the full time spent on the reviewed article. The reading time <span class="math inline"><i>T</i><sub>reading</sub> = 0.3</span> is quite short because the report is <i>targeted communication</i>, whose readers are in an ideal position to quickly absorb the information. Therefore, the ratio <span class="math inline"><i>R</i> = 0.3</span> may somewhat underestimate the efficiency of reports.</li>
<li><b>Email:</b> An email to other scientists that asks for technical explanations, or answers such a request. I assume that such emails are typically addressed to a few people (collaborators, authors of an article, etc), hence <span class="math inline"><i>N</i><sub>readers</sub> = 3</span>. The result <span class="math inline"><i>R</i> = 0.3</span> suggests that we spend more time writing than reading emails, which sounds right. Again, this is targeted communication.</li>
<li><b>Wikipedia:</b> A medium-size Wikipedia article on a technical and relatively obscure subject that was not previously well-covered. For example, <a href="https://en.wikipedia.org/wiki/Two-dimensional_conformal_field_theory">this article</a> that I wrote. Writing an acceptable Wikipedia article takes quite some time, of the order of <span class="math inline"><i>T</i><sub>writing</sub> = 30</span> hours. And it will typically be read quickly, let’s say in <span class="math inline"><i>T</i><sub>reading</sub> = 0.1</span> hours. But the number of readers is large: <a href="https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org&platform=all-access&agent=user&range=latest-20&pages=Two-dimensional_conformal_field_theory">pageview statistics</a> suggest a few dozen readers per day, so possibly <span class="math inline"><i>N</i><sub>readers</sub> = 10000</span> readers over a lifetime of one year or more before the text undergoes large changes. There are large uncertainties in all these numbers, but the result <span class="math inline"><i>R</i> = 30</span> is consistent with the idea that most scientists spend some time reading Wikipedia, and very little time (if any) writing in it.</li>
<li><b>Blog:</b> A blog post on a technical subject, in a personal blog by a typical researcher. In my experience, this is quite comparable to a reviewer report: after reading an interesting article in some detail I can blog about it, write a report if invited to do so, or both. The blog post will have many more readers than the confidential report; my blog’s counter suggests <span class="math inline"><i>N</i><sub>readers</sub> = 30</span> unique readers for such texts. I expect that the average reading time <span class="math inline"><i>T</i><sub>reading</sub> = 0.1</span> hours is quite short; nevertheless, the ratio <span class="math inline"><i>R</i> = 1</span> is better than for a report.</li>
<li><b>Answer:</b> A substantial question or answer on a StackExchange-like website. <span class="math inline"><i>N</i><sub>readers</sub> = 30</span> is estimated as a reasonable multiple of the typical number <span class="math inline">3 − 10</span> of upvotes. An answer is read quickly, and also written rather quickly, assuming that we choose to answer a question only if we can do so quite easily. So the writing time <span class="math inline"><i>T</i><sub>writing</sub> = 0.3</span> is lower than for an email, because with emails we have less freedom to choose whether to answer. The ratio <span class="math inline"><i>R</i> = 3</span> suggests that we spend more time reading answers than writing them.</li>
</ul>
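As a sanity check, the text-level formula can be evaluated directly from the estimates above. A minimal Python sketch, using the post's own order-of-magnitude numbers (the blog's writing time of 3 hours is an assumption, based on the comparison with a reviewer report):

```python
# Time ratio of a text: R = N_readers * T_reading / T_writing,
# using the post's order-of-magnitude estimates (times in hours).

def time_ratio(n_readers, t_reading, t_writing):
    """Total reading time over writing time for one text."""
    return n_readers * t_reading / t_writing

# (N_readers, T_reading, T_writing); the blog's writing time is an
# assumption, taken as comparable to a reviewer report's 3 hours.
venues = {
    "article":   (30,    1.0, 100),
    "wikipedia": (10000, 0.1, 30),
    "blog":      (30,    0.1, 3),
}

for name, (n, t_read, t_write) in venues.items():
    print(f"{name}: R = {time_ratio(n, t_read, t_write):.1f}")
```

The results reproduce the stated ratios within the announced factor-of-3 uncertainty: the article's ratio is 0.3, the blog's is 1, and Wikipedia's comes out near 30.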
<h4 id="interpretation-traditional-and-emerging-venues">
Interpretation: traditional and emerging venues</h4>
Articles, reports and emails have comparable time ratios. I would call them traditional venues, because they are routinely used by most working scientists. These venues compete with one another for the attention of scientists as readers and writers, and this competition may have reached an equilibrium state, which may explain why the ratios are comparable.<br />
<br />
Wikipedia, blogs and answers could be called emerging venues: writing in these venues is not part of the average scientist’s normal workflow, and does not formally count in her career. Reading Wikipedia and answers is however probably becoming the norm, and this imbalance between writing and reading may help explain the large time ratios of these venues.<br />
<br />
Let me try to elaborate by making a pairwise comparison between traditional and emerging venues:<br />
<br />
<table><thead>
<tr class="header"><th align="center">Writing time</th><th align="center"> Traditional</th><th align="center"> Emerging</th></tr>
</thead><tbody>
<tr class="odd"><td align="center">High</td><td align="center">Article</td><td align="center">Wikipedia</td></tr>
<tr class="even"><td align="center">Medium</td><td align="center">Report</td><td align="center">Blog</td></tr>
<tr class="odd"><td align="center">Low</td><td align="center">Email</td><td align="center">Answer</td></tr>
</tbody></table>
<br />
Blog posts and answers could be considered as public versions of private communication in reports and emails respectively. In these cases, the emerging venues might advantageously replace the traditional venues. It would be particularly advantageous to replace confidential reviewer reports with publicly available commentaries, whose authors would be known and could therefore receive credit, and which could in turn be publicly debated.<br />
<br />
It is however more far-fetched to imagine replacing articles by something like Wikipedia. To begin with, Wikipedia is in principle not for primary research, and not even for works of synthesis, but for compilations of statements from existing sources. Furthermore, articles’ texts are already publicly available (if we consider the problem of open access as <a href="https://en.wikipedia.org/wiki/Sci-Hub">solved</a>), and credit is harder to attribute in Wikipedia. This may explain why so few academics write in Wikipedia, and why Wikipedia’s time ratio is so large. Such a large ratio however implies that if they want to be read as much as possible, academics should write in Wikipedia.Sylvain Ribaulthttp://www.blogger.com/profile/01458212114354400137noreply@blogger.com0