Research Practices and Tools, by Sylvain Ribault<br /><br /><h3>Where should academics write if they want to be read?</h3>(2018-05-29) Out of the 2 million academic articles that are published each year, many are not read by anyone but their authors, and most have no more than a handful of readers. For someone who writes in order to spread ideas, and does not publish just to avoid perishing, this can be quite discouraging. Of course, not everything one has to say is of interest to many people. Still, part of the problem could come from the academic article as a venue, and one may wonder whether other writing venues (such as a blog) could reach more readers.<br /><br />In this post I will list various venues for scientific writing, and attempt an order-of-magnitude comparison between them. I will not simply estimate how many readers are reached: a tweet may reach many people, but it is read in seconds and can be quickly forgotten. Rather, I will try to estimate the ratio between the time spent by all readers on a text, and the time spent writing it.<br /><br />Like any quantitative metric of this type, this “time ratio” comes with potential problems of estimation and interpretation. I will deal with some of these problems in the next paragraph, and argue that the time ratio is meaningful for comparing venues at the order-of-magnitude level. 
Readers who already accept that can skip this paragraph.<br /><h4 id="the-fine-print">The fine print</h4>Here are the admittedly important caveats about the time ratio.<br /><ol><li><i>The ratio only deals with the writing itself, whereas much work is needed before having something interesting to write.</i> The purpose of the exercise is to compare the efficiency of various kinds of writing.</li><li><i>What about oral communication, videos, etc?</i> That would be an interesting generalization.</li><li><i>You favour unclear texts that make readers sweat longer.</i> At the level of venues rather than individual texts, the effect of clarity should average out.</li><li><i>Reading time is a poor proxy for the influence of ideas.</i> I want to measure the efficiency of writing venues, not the influence of ideas.</li><li><i>What about readers who read only part of the text?</i> This is taken into account when estimating reading time.</li><li><i>The time ratio cannot be defined or measured precisely.</i> Good! Then there is less potential for abuse than with the h-index or impact factor.</li></ol><h4 id="the-formulas">The formulas</h4>We can estimate the time ratio at the level of individual texts, or at the level of individual people. The time ratio for a text is <br /><span class="math display">$$R_\text{text} = N_\text{readers} \times \frac{T_\text{reading}}{T_\text{writing}}$$</span> The time ratio for a person is the time spent as a reader in a venue, divided by the time spent as a writer: <br /><span class="math display">$$R_\text{person} = \frac{T_\text{reader}}{T_\text{writer}}$$</span> For a given venue, these two definitions of the time ratio should agree on average. I will use the text-level definition as the primary computing tool, and the person-level definition as a sanity check. 
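To make the text-level formula concrete, here is a minimal Python sketch (an illustration of the arithmetic, not a tool from this post) that computes <span class="math inline"><i>R</i><sub>text</sub></span> for a few venues. The numbers are the post's own order-of-magnitude estimates where given; where a number is not stated explicitly (e.g. the readership of a report), I have filled in a value consistent with the quoted ratios.

```python
def time_ratio(n_readers, t_reading, t_writing):
    """Text-level time ratio: R_text = N_readers * T_reading / T_writing."""
    return n_readers * t_reading / t_writing

# (N_readers, T_reading, T_writing), times in hours.
# Order-of-magnitude estimates only, accurate up to factors of 3.
venues = {
    "article":   (30,    1.0, 100.0),
    "report":    (3,     0.3,   3.0),
    "blog":      (30,    0.1,   3.0),
    "wikipedia": (10000, 0.1,  30.0),
}

for name, (n, t_read, t_write) in venues.items():
    print(f"{name:9s}  R = {time_ratio(n, t_read, t_write):.1f}")
```

For example, the article entry gives R = 30 × 1/100 = 0.3; Wikipedia comes out at about 30, a value that should only be trusted at the order-of-magnitude level.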
I will present the numbers for a given venue as follows:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-TpZmqvGCTyw/Ww2j_p1xhfI/AAAAAAAACJI/mbvZ-3AnRkc_70pM0MuU87AavmtH6SYfgCLcBGAs/s1600/blog-figure0-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="94" data-original-width="661" height="45" src="https://3.bp.blogspot.com/-TpZmqvGCTyw/Ww2j_p1xhfI/AAAAAAAACJI/mbvZ-3AnRkc_70pM0MuU87AavmtH6SYfgCLcBGAs/s320/blog-figure0-1.png" width="320" /></a></div><br />The length of the white rectangle is proportional to the logarithm of <span class="math inline"><i>R</i><sub>text</sub></span>. If the ratio is less than one, this logarithm is negative, and the blue rectangle for <span class="math inline"><i>T</i><sub>writing</sub></span> actually spills to the left of the red rectangle. All these numbers are estimated up to factors of <span class="math inline">3</span>. Times are given in hours. So here are the results:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-UowgpAW3uwU/Ww2kFzsZPlI/AAAAAAAACJM/536-_fFBP3sfT_0Sj9kcn6eo4vO5Gdz5QCLcBGAs/s1600/blog-figure1-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="673" data-original-width="720" height="299" src="https://1.bp.blogspot.com/-UowgpAW3uwU/Ww2kFzsZPlI/AAAAAAAACJM/536-_fFBP3sfT_0Sj9kcn6eo4vO5Gdz5QCLcBGAs/s320/blog-figure1-1.png" width="320" /></a></div><br /><h4 id="where-do-these-numbers-come-from">Where do these numbers come from?</h4><ul><li><b>Article:</b> A typical scientific article. I estimate that an article is written in <span class="math inline"><i>T</i><sub>writing</sub> = 100</span> hours and read in <span class="math inline"><i>T</i><sub>reading</sub> = 1</span> hour. These numbers can vary a lot between articles; what matters is their ratio. 
By reader I mean someone who goes further than the abstract. The number of readers per paper <span class="math inline"><i>N</i><sub>readers</sub> = 30</span> can be estimated in two ways: as how many papers a scientist reads for each paper he writes (not forgetting that having coauthors means writing fractions of papers), or as the number of citations per paper. (Papers are often cited without being read, or read without being cited; I assume that these effects cancel out.) The resulting time ratio <span class="math inline"><i>R</i> = 0.3</span> suggests that the average scientist spends more (but not much more) time writing than reading articles, which sounds right. </li><li><b>Report:</b> A confidential reviewer report for a journal. The time <span class="math inline"><i>T</i><sub>writing</sub> = 3</span> hours is for the writing per se, not the full time spent on the reviewed article. The reading time <span class="math inline"><i>T</i><sub>reading</sub> = 0.3</span> is quite short because the report is <i>targeted communication</i>, whose readers are in an ideal position to quickly absorb the information. Therefore, the ratio <span class="math inline"><i>R</i> = 0.3</span> may somewhat underestimate the efficiency of reports.</li><li><b>Email:</b> An email to other scientists that asks for technical explanations, or answers such a request. I assume that such emails are typically addressed to a few people (collaborators, authors of an article, etc), hence <span class="math inline"><i>N</i><sub>readers</sub> = 3</span>. The result <span class="math inline"><i>R</i> = 0.3</span> suggests that we spend more time writing than reading emails, which sounds right. Again, this is targeted communication.</li><li><b>Wikipedia:</b> A medium-size Wikipedia article on a technical and relatively obscure subject that was not previously well-covered. For example, <a href="https://en.wikipedia.org/wiki/Two-dimensional_conformal_field_theory">this article</a> that I wrote. 
Writing an acceptable Wikipedia article takes quite some time, of the order of <span class="math inline"><i>T</i><sub>writing</sub> = 30</span> hours. And it will typically be read quickly, let’s say in <span class="math inline"><i>T</i><sub>reading</sub> = 0.1</span> hours. But the number of readers is large: <a href="https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org&platform=all-access&agent=user&range=latest-20&pages=Two-dimensional_conformal_field_theory">pageview statistics</a> suggest a few dozen readers per day, so possibly <span class="math inline"><i>N</i><sub>readers</sub> = 10000</span> readers over a lifetime of one year or more before the text undergoes large changes. There are large uncertainties in all these numbers, but the result <span class="math inline"><i>R</i> = 30</span> is consistent with the idea that most scientists spend some time reading Wikipedia, and very little time (if any) writing in it.</li><li><b>Blog:</b> A blog post on a technical subject, in a personal blog by a typical researcher. In my experience, this is quite comparable to a reviewer report: after reading an interesting article in some detail I can blog about it, write a report if invited to do so, or both. The blog post will have many more readers than the confidential report: my blog’s counter suggests <span class="math inline"><i>N</i><sub>readers</sub> = 30</span> unique readers for such texts. I expect that the average reading time <span class="math inline"><i>T</i><sub>reading</sub> = 0.1</span> hours is quite short; nevertheless, the ratio <span class="math inline"><i>R</i> = 1</span> is better than for a report.</li><li><b>Answer:</b> A substantial question or answer on a StackExchange-like website. <span class="math inline"><i>N</i><sub>readers</sub> = 30</span> is estimated as a reasonable multiple of the typical number of upvotes, <span class="math inline">3 − 10</span>. 
An answer is read quickly, and also written rather quickly, assuming that we choose to answer a question only if we can do so quite easily. So <span class="math inline"><i>T</i><sub>writing</sub> = 0.3</span> is shorter than for an email, because with emails we have less freedom to choose whether to answer or not. The ratio <span class="math inline"><i>R</i> = 3</span> suggests that we spend more time reading answers than writing them.</li></ul><h4 id="interpretation-traditional-and-emerging-venues">Interpretation: traditional and emerging venues</h4>Articles, reports and emails have comparable time ratios. I would call them traditional venues, because they are routinely used by most working scientists. These venues compete with one another for the attention of scientists as readers and writers, and this competition may have reached an equilibrium state, which could explain why the ratios are comparable.<br /><br />Wikipedia, blogs and answers could be called emerging venues: writing in these venues is not part of the average scientist’s normal workflow, and does not formally count in her career. 
Reading Wikipedia and answers is however probably becoming the norm, and this imbalance between writing and reading may help explain the large time ratios of these venues.<br /><br />Let me try to elaborate by making a pairwise comparison between traditional and emerging venues:<br /><br /><table><thead><tr class="header"><th align="center">Writing time</th><th align="center"> Traditional</th><th align="center"> Emerging</th></tr></thead><tbody><tr class="odd"><td align="center">High</td><td align="center">Article</td><td align="center">Wikipedia</td></tr><tr class="even"><td align="center">Medium</td><td align="center">Report</td><td align="center">Blog</td></tr><tr class="odd"><td align="center">Low</td><td align="center">Email</td><td align="center">Answer</td></tr></tbody></table><br />Blog posts and answers could be considered as public versions of private communication in reports and emails respectively. In these cases, the emerging venues might advantageously replace the traditional venues. It would be particularly advantageous to replace confidential reviewer reports with publicly available commentaries, whose authors would be known and could therefore receive credit, and which could in turn be publicly debated.<br /><br />It is however more far-fetched to imagine replacing articles by something like Wikipedia. To begin with, Wikipedia is in principle not for primary research, and not even for works of synthesis, but for compilations of statements from existing sources. Furthermore, articles’ texts are already publicly available (if we consider the problem of open access as <a href="https://en.wikipedia.org/wiki/Sci-Hub">solved</a>), and credit is harder to attribute in Wikipedia. This may explain why so few academics write in Wikipedia, and why Wikipedia’s time ratio is so large. 
Such a large ratio however implies that, if they want to be read as much as possible, academics should write in Wikipedia.<br /><br /><h3>Write that printing money creates wealth, get published in Nature</h3>(2018-03-29) In the debate about the carbon footprint of cryptocurrencies, Nature has recently published a one-page correspondence titled “<a href="https://www.nature.com/articles/d41586-018-03391-2">Cryptocurrency mining is neither wasteful nor uneconomic</a>”. This counter-intuitive claim provoked me to read the text in search of a non-trivial idea. To my shock and horror, the only basis for the claim is the trivial wordplay of using the same term “wealth” for both <i>money</i> and <i>goods and services</i>. <br />The idea is that creating a given amount in cryptocurrency consumes fewer resources than producing the goods and services that can be bought with that amount, so cryptocurrency mining is more economic than other activities. By the same argument, printing paper money generates wealth so long as the paper is worth less than the amount written on it.<br /><br />I won’t blame the author of this nonsense: everyone can have stupid ideas from time to time. Who knows? This might even be a hoax. But Nature’s editorial staff is supposed to act as a filter, and not to publish nonsense. In selecting research articles for publication, Nature is already <a href="http://researchpracticesandtools.blogspot.fr/2018/01/germany-wont-pay-for-natures-scientific.html">being criticized</a> for privileging striking claims over solid results. 
It would be worrying if Nature’s news and comments sections started sliding down a similar slope.<br /><br />The correspondence in question unwittingly reinforces the argument against cryptocurrency mining: while obviously wasteful, that activity is encouraged in the existing economic system, and in this particular sense it is indeed not uneconomic. When money was based on precious metals, producing money did take effort and resources. But in paper and electronic forms, money has become almost free to produce. Consuming large amounts of resources to produce cryptocurrencies is therefore a regression, and a needless one: there already exist cryptocurrencies that do not require mining.<br /><br /><h3>Springer threatens to go rogue, and retreats</h3>(2018-03-26) In its <a href="http://researchpracticesandtools.blogspot.fr/2018/02/couperin-vs-springer-and-elsevier.html">negotiations with Springer</a>, the consortium Couperin, which was in charge of most French researchers’ subscriptions, had been modestly asking that Springer renounce “double dipping”, i.e. not get paid twice for the same articles – once via subscriptions, once via open access APCs. But this would have meant decreasing subscription prices by 15%, while Springer has been used to yearly increases of the order of 3-5%.<br /><br />Today’s news is that negotiations have broken down, and most French researchers are set to lose legal access to most articles published by Springer on April 1st. (See CNRS’s <a href="https://drive.google.com/file/d/1Tx475Wb1y5GdoVcFLFRnCpz7NnimLxQi/view?usp=sharing">note on the subject</a>, in French.) 
In such a conflict, researchers can stand up against the publisher by<br /><ul><li>not complaining when they lose access to journals, and getting the articles elsewhere (which nowadays mostly means at Sci-Hub),</li><li><a href="http://researchpracticesandtools.blogspot.fr/2017/12/after-elsevier-should-we-boycott.html">boycotting the publisher</a>, i.e. not submitting articles to its journals, and not working for them as a referee or editor.</li></ul>Coincidentally, I have received an invitation to referee an article for JHEP, a journal published by Springer. I have declined as follows:<br /><blockquote>Most French research institutions are set to lose access to Springer journals on April 1st. I am aware that this does not affect access to JHEP, which is covered by SCOAP3, and that in our field all articles are on arXiv anyway. However, as a matter of principle, I would like to protest Springer’s extortionate commercial practices, and to show solidarity with colleagues who will lose access to their own work. Therefore, I am suspending all new collaboration with JHEP, and I am declining to review this article.</blockquote><br /><b>Update on April 3rd: </b>According to an email from Couperin, access to Springer journals has in fact not been cut off. It seems that Couperin got away with rejecting Springer's latest offer, and saying that they were willing to continue negotiating. So Springer was bluffing, and does not dare take a hard line against Couperin. If such a weak and ill-prepared consortium, with little support from researchers (who else is boycotting Springer?) can defeat Springer, pretty much anyone can.<br /><br /><b>Update on April 11th: </b>There is now a petition called "<a href="https://www.change.org/p/springer-nous-pouvons-nous-en-passer-springer-we-can-do-without">Springer, we can do without</a>". 
<h3>The open secrets of life with arXiv</h3>(2018-03-20) If you only think of arXiv as a tool for making articles openly accessible, consider this: in 2007, a <a href="https://arxiv.org/abs/0712.1037">study</a> <a href="https://scholarlykitchen.sspnet.org/2008/08/07/the-importance-of-being-first/">showed</a> that papers appearing near the top of the daily listing of new papers on arXiv will eventually be more cited than papers further down the list – about two times more cited. And there is a daily scramble for submitting papers as soon as possible after the 14:00 EDT deadline, in order to appear as high as possible on the listing. The effect is not as perverse as it seems, as there is no strong causal relation between appearing near the top and getting more citations. (More likely, better papers are higher in the listing because their authors want to advertise them.)<br /><br />The consequences of arXiv’s systematic use in some communities are actually so deep that a speciation event has occurred among researchers, and a new species of <b>arXivers</b> has appeared. Here I will try to explain how arXivers live, in order to help non-arXivers understand arXivers, and have an idea of what could happen to them if the currently proliferating clones of arXiv gained widespread use.<br /><br />Let me define an arXiver as a researcher who puts all her articles on arXiv, and whose colleagues do the same. It may seem that there are no arXivers, since in any given discipline (even physics) only <a href="https://arxiv.org/abs/1306.3261">a minority of articles are on arXiv</a>. However, speciation did not happen discipline by discipline: rather, there are a number of subfields where close to 100% of articles are on arXiv, and whose researchers are therefore arXivers. 
Historically, the first such subfield was theoretical high-energy physics.<br /><br />An arXiver’s working day almost invariably starts with checking the new papers on arXiv. Since all relevant new papers are there, this has become the sole method for arXivers to keep up to date with the literature. When they write papers, arXivers send them to arXiv first, and later (if at all) to a peer-reviewed journal. The consequences are profound and manifold:<br /><ol><li>A paper on arXiv counts as a claim of priority: once your results are on arXiv, <b>you can no longer get scooped</b>.</li><li>A paper that is not on arXiv will never get read or cited. So there is a tipping point in the adoption of arXiv in a research community, after which <b>using arXiv becomes mandatory</b> – unless you do not want to be read. (An arXiver may write a junk paper to inflate his publication list, and publish it in an obscure journal.)</li><li>A paper that catches no attention the day it appears on arXiv may never get read. Catching attention does not necessarily mean being read immediately: it may mean being printed or saved or otherwise marked for future reading. And not all readers will be caught when the paper appears: some readers may attract further readers by citing or recommending the paper. But <b>to catch some attention when appearing is vital</b>, unless one counts on second-order effects such as advertising the paper via talks, or having one or two journal referees read it.</li><li>These were pretty straightforward deductions, right? Now we reach a first nontrivial consequence: <b>submitting to arXiv is more demanding than submitting to a journal</b>, and texts submitted <i>by arXivers to arXiv</i> are of higher quality than texts submitted <i>by non-arXivers to journals</i>. An arXiver indeed stakes her paper’s future, and part of her own reputation, on the first arXiv version. 
(Of course, there are always cases where people initially submit rough drafts, whether to arXiv or to journals.)</li><li><b>ArXivers do not follow what journals publish</b>. However, they may end up reading journal versions of papers if those versions are put on arXiv, or via citation lists that link to journals rather than to arXiv.</li><li>ArXivers do not always stop improving a paper after it is published in a journal. Moreover, they may disagree with some of the changes that are requested by the journal, and keep their preferred version on arXiv. So <b>the arXiv version can be better than the journal version</b>. This happens in particular when the journal in question has <a href="http://researchpracticesandtools.blogspot.fr/2016/10/physical-review-letters-physics-luxury.html#more">pointless formatting constraints and length limitations</a>.</li><li><b>ArXivers do publish in journals, but mostly for the sake of their careers</b>, rather than for the peer review process. They are forced to do it by established bureaucratic and bibliometric procedures, even though the resulting improvements will often benefit few readers. Some established researchers can afford to publish <a href="http://inspirehep.net/author/profile/C.Vafa.1">some of their papers on arXiv only</a>, but having junior coauthors still forces them to send some other papers to journals. Nevertheless, it is now possible to <a href="https://en.wikipedia.org/wiki/Grigori_Perelman">earn a Fields medal</a> based on arXiv papers.</li><li><b>The quality of peer review in journals that are frequented by arXivers has declined</b> and is often poor. OK, there are no data that I know of for backing this claim, and even without arXiv, the proliferation of papers makes peer review decline. Still, this is an inevitable consequence of points 4, 5 and 7. 
Official peer review by journals is increasingly irrelevant to arXivers, but peer review is still practiced unofficially in the form of readers’ feedback, or mentions in later papers. And some papers are occasionally <a href="https://arxiv.org/abs/1712.09387">debated on arXiv</a>.</li><li><b>The disappearance of journals would hardly affect arXivers’ research work</b>. A fortiori, arXivers do not need access to journals. (<a href="https://scoap3.org/">SCOAP3</a> was a waste of public money even before Sci-Hub made journal subscriptions universally irrelevant.) Journals are important to arXivers only insofar as they are administratively mandated for purposes such as managing careers.</li></ol>The last point contradicts the dogma that organized peer review is necessary for scientific research. In effect, arXivers have been running a vast experiment for decades, which consisted in marginalizing the substance of the journals’ work. This may not have been widely noticed, because their journals have continued running, even though they have been taken less and less seriously by readers, authors and reviewers. Of course, the dogma of the necessity of peer review is itself relatively recent, and <a href="http://theconversation.com/hate-the-peer-review-process-einstein-did-too-27405">did not hold sway a few decades ago</a>.<br /><br />To conclude, although arXiv has very basic functionalities that did not change much since 1991, its actual role and influence far exceed providing easy access to papers. ArXiv plays such a fundamental role in arXivers' scientific lives because <b>all</b> papers that arXivers read appear there <b>first</b>. 
There should be a name for this type of open access, which combines no costs to authors and readers, complete coverage of new articles, and priority over other ways of distributing articles.<br /><br /><b>A word of caution: </b>Many researchers are transitional forms between the arXivers that I described, and the traditional scientists for whom journals are essential. For example, mathematicians are heavy users of arXiv, but journals still play an important role for them, and their peer review process can be extremely rigorous and valuable.<br /><br /><h3>Uniqueness of Liouville theory</h3>(2018-03-01) The original definition of Liouville theory by Polyakov in the 1980s was written in terms of a Lagrangian, motivated by applications to two-dimensional quantum gravity. In the 1990s however, Liouville theory was reformulated and solved in the conformal bootstrap approach. In this approach, the theory is characterized by a number of assumptions, starting with conformal symmetry. In order to actually define the theory, the assumptions have to be restrictive enough for singling out a unique consistent theory.<br /><br />After assuming conformal symmetry, it is natural to make assumptions on the theory’s spectrum, i.e. its space of states. 
For any complex value of the central charge <span class="math">\(c\)</span>, the spectrum of Liouville theory is<br /><span class="math">\[\mathcal{S} = \int_{\frac{c-1}{24}}^{\frac{c-1}{24}+\infty} d\Delta\ \mathcal{V}_\Delta \otimes \bar{\mathcal{V}}_\Delta\ ,\]</span><br />where <span class="math">\(\mathcal{V}_\Delta\)</span> is a Verma module (with conformal dimension <span class="math">\(\Delta\)</span>) of the left-moving Virasoro algebra, and <span class="math">\(\bar{\mathcal{V}}_\Delta\)</span> is the same Verma module of the right-moving Virasoro algebra. Some features of this spectrum are:<br /><ul><li>It is continuous, with no discrete terms.</li><li>Each representation appears with multiplicity one.</li><li>It is diagonal, i.e. a given left-moving representation is always paired with the same right-moving representation.</li><li>For <span class="math">\(c>1\)</span>, the spectrum is unitary, i.e. each Verma module has a positive definite bilinear form such that the Virasoro generator <span class="math">\(L_0\)</span> is self-adjoint.</li></ul>The question is whether Liouville theory is the unique consistent theory with this spectrum, and whether other theories can be obtained by relaxing some assumptions on the spectrum. Showing that Liouville theory is unique amounts to constraining its three-point structure constant, since all correlation functions on the sphere are determined by the spectrum and three-point structure constant. Before discussing this, let me review some circumstances in which the existence and/or uniqueness of Liouville theory are known to break down:<br /><ul><li><b>Higher multiplicities:</b> Allowing each representation to have a finite multiplicity leads to theories that may be called <span class="math">\(N\times\)</span> Liouville theory with <span class="math">\(N\geq 2\)</span>. 
Then each representation is characterized not only by its conformal dimension, but also by a multiplicity index that can take <span class="math">\(N\)</span> values.</li><li><b>Discrete terms:</b> Correlation functions of Liouville theory depend analytically on the fields’ conformal dimensions. So these dimensions can be analytically continued outside the spectrum without much happening. More precisely, the operator product of two fields of dimensions <span class="math">\(\Delta_1\)</span> and <span class="math">\(\Delta_2\)</span> is a linear combination of fields with dimensions <span class="math">\(\Delta \in (\frac{c-1}{24},\frac{c-1}{24}+\infty)\)</span>, provided <span class="math">\(\Delta_1\)</span> and <span class="math">\(\Delta_2\)</span> are close enough to this same interval. However, discrete terms have to be included if <span class="math">\(\Delta_1\)</span> and <span class="math">\(\Delta_2\)</span> go too far, in particular if <span class="math">\(c>1\)</span> and <span class="math">\(\Delta_1=\Delta_2<\frac{c-1}{32}\)</span>. (See Exercise 3.5 of <a href="https://arxiv.org/abs/1406.4290">my review article</a> for more details.)</li><li><b>Non-analytic structure constants:</b> For some values of the central charge, the assumption that structure constants are analytic in the conformal dimensions is known to be necessary for Liouville theory to be unique. Namely, for <span class="math">\(c=1-6\frac{(p-q)^2}{pq}\)</span> with <span class="math">\(p,q\)</span> positive integers (i.e. a central charge where a minimal model exists), there exists an alternative consistent theory with the same spectrum as Liouville theory, but non-analytic structure constants. A theory of this type was first found by <a href="https://arxiv.org/abs/hep-th/0107118">Runkel and Watts</a> at <span class="math">\(c=1\)</span>. 
The generalization to other rational central charges was <a href="https://arxiv.org/abs/0706.0365">proposed by McElgin</a> and <a href="https://arxiv.org/abs/1503.02067">confirmed by myself and Santachiara</a>.</li></ul>In order to be really interesting, a proof of the uniqueness of Liouville theory should also be able to deal with its non-uniqueness in such circumstances. Even better would be to suggest the existence of new generalizations of Liouville theory under some other assumptions. I will now summarize two known analytic approaches to the problem, before discussing the more recent <a href="https://arxiv.org/abs/1702.00423">numerical work by Collier, Kravchuk, Lin and Yin</a>. (Reviewing that work for JHEP was the original motivation for writing this post.)<br /><br /><h4 id="degenerate-fields-and-teschners-trick">Degenerate fields and Teschner’s trick</h4>The basic idea of the conformal bootstrap approach is to solve crossing symmetry equations. The basic problem of the approach is that these equations involve sums over a basis of the spectrum, and such a basis is infinite in most cases, including in the case of Liouville theory. To circumvent this problem, an idea is to use crossing symmetry equations for four-point functions that involve one degenerate field. Since the fusion of a degenerate field with any other field produces only finitely many primary fields, the corresponding crossing symmetry equations involve finitely many terms. In the case of the simplest nontrivial degenerate fields, these equations actually involve only two terms, i.e. the four-point functions are combinations of only two conformal blocks. The equations can be written explicitly, and solved analytically.<br /><br />It is far from obvious that degenerate fields can be used in Liouville theory, since such fields are absent from the spectrum. 
Nevertheless, the assumption that such fields exist leads to the complete determination of the three-point structure constants, and the results agree with the DOZZ formula for these structure constants. The <a href="https://arxiv.org/abs/hep-th/9506136">original derivation</a> of the DOZZ formula relied on the Lagrangian formulation of Liouville theory, and therefore said nothing about uniqueness. In contrast, <a href="https://arxiv.org/abs/hep-th/9507109">Teschner’s degenerate crossing symmetry equations</a> imply a statement of uniqueness, under the assumption that degenerate fields exist.<br /><br />Having an analytic formula for structure constants makes it possible to study the appearance of discrete terms, when the conformal dimensions are analytically continued. Moreover, the degenerate crossing symmetry equations strongly suggest that non-analytic structure constants can exist for certain discrete values of the central charge. Restricting to <span class="math">\(c\leq 1\)</span> and writing <span class="math">\(c = 1-6(\beta - \frac{1}{\beta})^2\)</span> with <span class="math">\(\beta \in \mathbb{R}\)</span>, the degenerate crossing symmetry equations indeed dictate how the three-point structure constant behaves when momentums are shifted by <span class="math">\(\beta\)</span> or <span class="math">\(\frac{1}{\beta}\)</span>. (The momentum <span class="math">\(P\)</span> associated to the conformal dimension <span class="math">\(\Delta\)</span> is defined by <span class="math">\(\Delta = \frac{c-1}{24}+P^2\)</span>.) Assuming that the structure constant is a continuous function of dimensions, this determines it uniquely if <span class="math">\(\beta^2\)</span> is an irrational number. But uniqueness fails if <span class="math">\(\beta^2 = \frac{p}{q}\)</span> is rational, and this corresponds to central charges where Liouville theory is known not to be unique. 
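The dichotomy between rational and irrational <span class="math">\(\beta^2\)</span> boils down to the fact that the set of combined shifts <span class="math">\(\{m\beta + \frac{n}{\beta}\}_{m,n\in\mathbb{Z}}\)</span> is dense in <span class="math">\(\mathbb{R}\)</span> if and only if <span class="math">\(\beta^2\)</span> is irrational. Here is a quick numerical illustration of that fact (my own sketch, with arbitrarily chosen values of <span class="math">\(\beta\)</span>, not code from any of the cited articles):

```python
import itertools

def min_positive_shift(beta, N):
    """Smallest positive element of {m*beta + n/beta : |m|, |n| <= N}.
    The 1e-9 cutoff discards combinations that are zero up to rounding."""
    vals = (m * beta + n / beta
            for m, n in itertools.product(range(-N, N + 1), repeat=2))
    return min(v for v in vals if v > 1e-9)

# beta^2 = 2/3 is rational: the shifts form a discrete lattice,
# whose spacing 1/sqrt(6) does not shrink as N grows.
print(min_positive_shift((2 / 3) ** 0.5, 20))   # about 0.4082
print(min_positive_shift((2 / 3) ** 0.5, 80))   # about 0.4082

# beta^2 = sqrt(2) is irrational: the shifts become dense in R,
# so the smallest positive shift keeps shrinking as N grows.
print(min_positive_shift(2 ** 0.25, 20))
print(min_positive_shift(2 ** 0.25, 80))
```

In the rational case <span class="math">\(\beta^2=\frac{p}{q}\)</span> the shifts only reach the lattice <span class="math">\(\frac{1}{\sqrt{pq}}\mathbb{Z}\)</span>, which is why a continuous structure constant is no longer uniquely determined between lattice points.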
Of course, more work is needed in order to make more precise statements, including explaining why uniqueness fails if <span class="math">\(\beta^2\)</span> is a positive rational number, but not if it is a negative rational number.<br /><br /><h4 id="analytic-crossing-symmetry-equations">Analytic crossing symmetry equations</h4><h4 id="analytic-crossing-symmetry-equations"> </h4>Since the assumption that degenerate fields exist is strong, the corresponding statement of uniqueness is weak. What can we deduce from crossing symmetry of four-point functions that do not involve degenerate fields? In diagonal conformal field theories, crossing symmetry can be reduced to an algebraic relation between the three-point structure constant (<span class="math">\(C\)</span>) and the fusing matrix (<span class="math">\(F\)</span>),<br /><span class="math">\[C_{12s} C_{s34} F_{\Delta_s,\Delta_t}\begin{bmatrix} \Delta_2 & \Delta_3 \\ \Delta_1 & \Delta_4 \end{bmatrix} = C_{23t}C_{t41} F^{-1}_{\Delta_t,\Delta_s}\begin{bmatrix} \Delta_2 & \Delta_3 \\ \Delta_1 & \Delta_4 \end{bmatrix}\]</span><br />(See for example <a href="https://arxiv.org/abs/1406.4290">my review article</a> for the derivation.) This relation easily implies that the three-point structure constant is unique. However, it is difficult to determine under which assumptions that result holds. The derivation of our relation relies on the linear independence of a family of conformal blocks: this independence is intuitively obvious in the limit <span class="math">\(z\to 0\)</span>, where the blocks behave as powers of <span class="math">\(z\)</span>, although proving it rigorously in a well-defined space of functions may not be that easy.<br /><br />We could well accept the above relation in the case where all involved representations belong to the Liouville spectrum, and try to analytically continue it in conformal dimensions and/or the central charge, in order to find out when uniqueness breaks down. 
However, this would require working with the fusing matrix <span class="math">\(F\)</span>. While <a href="https://arxiv.org/abs/1202.4698">explicitly known in principle</a>, the fusing matrix is a complicated object, and not easy to work with.<br /><br />Therefore, the argument that Liouville theory is unique based on crossing symmetry equations with no degenerate fields may be more a formal manipulation than a genuine proof, and is hard to use for testing the limits of uniqueness.<br /><br /><h4 id="numerics-with-the-spectral-function">Numerics with the spectral function</h4><h4 id="numerics-with-the-spectral-function"> </h4>In their <a href="https://arxiv.org/abs/1702.00423">recent article</a>, Collier, Kravchuk, Lin and Yin build on ideas, techniques and code from the ongoing conformal bootstrap renaissance, i.e. the successful numerical application of conformal bootstrap methods to conformal field theories in arbitrary dimensions (not just two dimensions) that started about ten years ago. Their first implementation of the numerical bootstrap relies on the assumption of unitarity. This assumption implies that squares of three-point structure constants are positive, and crossing symmetry leads to inequalities on correlation functions. These inequalities are in general used for deriving bounds on conformal dimensions and/or structure constants. In their article, Collier, Kravchuk, Lin and Yin find it easier to derive bounds on the spectral function <span class="math">\(f(\Delta_*)\)</span>, which they define as the contribution of Virasoro primary fields with dimensions at most <span class="math">\(\Delta_*\)</span> to a four-point function with cross-ratio <span class="math">\(z=\frac12\)</span>. 
Known numerical bootstrap techniques usually deal with discrete spectrums, and the use of the spectral function is an adaptation to the case of continuous spectrums.<br /><br />So they derive bounds on the spectral function, assuming not only unitarity, but also that the spectrum is diagonal, i.e. that all primary fields have zero spin. The spectral function of Liouville theory sits right in the middle of their bounds, although the allowed region is a bit large, and the authors admit that their bounds “have not quite converged convincingly”. They attribute this lack of convergence to the computational complexity of their method.<br /><br />As a result, this method produces only weak evidence that Liouville theory is unique. The method can also be used to study the analytic continuation of Liouville theory, and more specifically the four-point function of a scalar field whose dimension obeys <span class="math">\(\Delta \geq \frac{c-1}{32}\)</span> rather than the Liouville theory bound <span class="math">\(\Delta \geq \frac{c-1}{24}\)</span>. (The lower bound <span class="math">\(\Delta = \frac{c-1}{32}\)</span> corresponds to the appearance of the discrete terms.)<br /><br /><h4 id="second-numerical-approach-the-linear-method">Second numerical approach: the linear method</h4><h4 id="second-numerical-approach-the-linear-method"> </h4>The linear method, used in Section 3.1, consists in writing crossing symmetry as a linear equation for the squared structure constant <span class="math">\(C^2_\Delta\)</span>,<br /><span class="math">\[\forall z \in \mathbb{C}\ , \quad \int d\Delta\ C^2_\Delta \mathcal{F}_\Delta(z) = 0 \ ,\]</span><br />where <span class="math">\(\mathcal{F}_\Delta(z)\)</span> is a conformal block. Dealing with a continuous spectrum, and therefore a continuous set of conformal blocks, is not easy. 
The trick is to consider the conformal blocks not as a <span class="math">\(\Delta\)</span>-parametrized family of functions of <span class="math">\(z\)</span>, but as a <span class="math">\(z\)</span>-parametrized family of functions of <span class="math">\(\Delta\)</span>. After Taylor expanding near <span class="math">\(z=\frac12\)</span>, this family becomes discrete, and therefore easier to deal with. Schematically, the crossing symmetry equation becomes<br /><span class="math">\[\forall m \in \mathcal{M}\ , \quad \int d\Delta\ C^2_\Delta \mathcal{F}_m(\Delta) = 0 \ ,\]</span><br />with <span class="math">\(\mathcal{M}\)</span> some discrete set of parameters. There is also a normalization condition of the type <span class="math">\(\int d\Delta\ C^2_\Delta \mathcal{F}_0(\Delta)=1\)</span>, which eliminates the trivial solution <span class="math">\(C^2_\Delta = 0\)</span>.<br /><br />Given a finite subset <span class="math">\(\mathcal{M}_N\)</span> of <span class="math">\(\mathcal{M}\)</span>, the idea is to build an approximate solution <span class="math">\((C_N)^2_\Delta\)</span> as the linear combination of <span class="math">\(\mathcal{F}_0\)</span> and <span class="math">\((\mathcal{F}_m)_{m\in\mathcal{M}_N}\)</span> that satisfies the crossing symmetry equation for <span class="math">\(m\in\mathcal{M}_N\)</span>. Choosing the subsets <span class="math">\(\mathcal{M}_N\)</span> such that <span class="math">\(\lim_{N\to \infty} \mathcal{M}_N = \mathcal{M}\)</span>, the question is then whether <span class="math">\(\lim_{N\to \infty} (C_N)^2_\Delta\)</span> exists and agrees with the Liouville theory structure constant. (For this to work, it is actually necessary to rescale the conformal blocks, structure constant and integration measure <span class="math">\(d\Delta\)</span> by <span class="math">\(\Delta\)</span>-dependent normalization factors.) 
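The linear algebra behind this construction fits in a few lines. The following toy sketch is my own illustration of the scheme, with made-up stand-in blocks <span class="math">\(\mathcal{F}_m(\Delta) = e^{-m\Delta}\)</span> on a discretized <span class="math">\(\Delta\)</span> grid; it uses neither actual Virasoro conformal blocks nor the authors’ code:

```python
import numpy as np

# Toy sketch of the linear method: build the linear combination of "blocks"
# satisfying the normalization condition and N truncated crossing-like
# equations. The blocks exp(-m*Delta) are arbitrary stand-ins.
grid = np.linspace(0.0, 10.0, 2001)   # discretized continuous spectrum
dD = grid[1] - grid[0]                # integration measure d(Delta)

def block(m):
    """Hypothetical stand-in for the rescaled conformal block F_m(Delta)."""
    return np.exp(-m * grid)

def approximate_solution(N):
    """C_N^2 = sum_{m=0}^N a_m F_m, fixed by the conditions
    int dD C_N^2 F_0 = 1 (normalization) and
    int dD C_N^2 F_m = 0 for m = 1, ..., N (truncated crossing)."""
    basis = np.array([block(m) for m in range(N + 1)])
    gram = basis @ basis.T * dD       # matrix of integrals int dD F_i F_j
    rhs = np.zeros(N + 1)
    rhs[0] = 1.0
    coeffs = np.linalg.solve(gram, rhs)
    return coeffs @ basis             # C_N^2 evaluated on the grid

c2 = approximate_solution(4)
# The truncated equations hold by construction:
print((c2 * block(0)).sum() * dD)     # normalization, about 1
print((c2 * block(2)).sum() * dD)     # crossing-like equation, about 0
```

The real question, which this toy cannot answer, is whether <span class="math">\((C_N)^2_\Delta\)</span> converges as <span class="math">\(N\to\infty\)</span> when the <span class="math">\(\mathcal{F}_m\)</span> are the actual blocks; with the stand-in exponentials the Gram matrix quickly becomes ill-conditioned, so the toy only makes sense at small <span class="math">\(N\)</span>.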
The result is that <span class="math">\((C_N)^2_\Delta\)</span> indeed converges relatively quickly towards the Liouville theory structure constant. The mathematical interpretation of this convergence is that <span class="math">\((\mathcal{F}_m)_{m\in\mathcal{M}}\)</span> form a basis of a certain space of functions of <span class="math">\(\Delta\)</span>, and their linear independence is the reason why Liouville theory is unique.<br /><br />What I do not understand is why the authors formulate this argument in terms of the spectral function. My impression is that the argument should work with the structure constant <span class="math">\(C^2_\Delta\)</span> itself, and that using the spectral function is an unnecessary complication.<br /><br />Now what are the consequences for Liouville theory? Keeping the assumption that the theory is diagonal, but dropping the assumption of unitarity, this argument provides good evidence that Liouville theory is unique. However, the argument is not rigorous, in particular when dealing with functional spaces. This argument may therefore be seen as a numerical counterpart of the argument with analytic crossing symmetry equations that I reviewed above.<br /><br /><h4 id="side-results">Side results</h4><h4 id="side-results"> </h4>In Section 3.2, Collier, Kravchuk, Lin and Yin analytically derive the spectral density of Liouville theory, as the unique solution of the modular invariance equation for the torus partition function. To do this, they need to assume not only unitarity of the theory, but also consistency on the torus. The only useful piece of information that follows from such strong assumptions is a confirmation that the lower bound on the spectrum must be <span class="math">\(\frac{c-1}{24}\)</span>. 
This meagre result only reinforces my <a href="http://researchpracticesandtools.blogspot.fr/2014/09/modular-invariance-in-non-rational-cft.html">skepticism about using modular invariance in non-rational CFT</a>.<br /><br />In Section 3.3, the problem of higher multiplicities is analyzed, and it is argued that allowing finite multiplicities while insisting that the theory remains diagonal only allows <span class="math">\(N\times\)</span> Liouville theory, which the authors call Liouville theory with superselection sectors. The argument assumes that the theory is consistent on arbitrary Riemann surfaces, not only on the sphere.<br /><br /><h4 id="conclusion-3">Conclusion</h4><h4 id="conclusion-3"> </h4>The numerical bootstrap work of Collier, Kravchuk, Lin and Yin brings a modest increase to the already high confidence that Liouville theory is unique. In my opinion, their most promising method is the linear method, which uses fewer assumptions (in particular not unitarity) and converges much better than the bounds on the spectral function. In order to make the linear method easy to use, it would be good to simplify it by getting rid of spectral functions (if I am right that this is possible), and to release the corresponding code. The real justification for this method would be to find new CFTs with continuous spectrums, or at least new crossing-symmetric four-point functions, after relaxing or changing some of the assumptions that single out Liouville theory.<br /><br />Sylvain Ribault<br /><br /><h3>Couperin vs Springer and Elsevier: towards less extortionate deals? (2018-02-25)</h3>Historically, the French consortium Couperin has obtained poor results in negotiating with predatory publishers, mostly consenting to their high and increasing prices. 
This is not necessarily Couperin’s fault, although it does not help that Couperin’s leadership <a href="http://researchpracticesandtools.blogspot.fr/">appears weak and ill-informed</a>. Rather, this is a consequence of the basic economics of scientific publishing, with publishers systematically abusing their strong position. Even the Finnish consortium FinELib, which was determined to seek a good deal and enjoyed a fair amount of support from researchers, recently consented to one more extortionate deal with Elsevier.<br /><br />However, recent developments suggest that Couperin could fare better in current and upcoming negotiations:<br /><a name='more'></a><br />The availability of most articles via the pirate site Sci-Hub makes subscriptions less necessary, and the German consortium DEAL has shown how research institutions can take the upper hand in negotiations, while working towards a radical reform of scientific publishing. But is Couperin really prepared to emulate DEAL? Françoise Rousseau-Hans, head librarian at CEA and in charge of foresight at Couperin, gave me some information on the issue.<br /><br /><h4 id="the-librarians-tale">The librarian’s tale</h4><h4 id="the-librarians-tale"> </h4>Françoise Rousseau-Hans and her colleagues are doing much detailed technical work in tracking journals’ prices, article downloads from users, publishers’ commercial practices, etc. For example, they are able to compare subscription prices with what articles would cost if accessed via pay per view.<br /><br />This also allows them to understand when publishers are trying to trick them. For example, here is a basic publisher’s trick: while subscriptions are to bundles rather than to individual journals, journals still have list prices, and these prices are used in negotiations as a basis for the bundle’s price. 
When a journal is due to exit the bundle, for example because it will become part of a different deal such as SCOAP3, the publisher anticipates this by decreasing its list price, and bringing it close to zero. The publisher indeed knows that after the journal exits the bundle, subscribers will ask for a corresponding decrease of the bundle’s price.<br /><br />Much of this information, and in particular how much each academic institution pays to each publisher, is typically kept private, although librarians share such information with one another. The reason for not publishing such information is to prevent publishers from comparing prices with one another.<br /><br />In the ongoing negotiation with Springer, Couperin is asking for a price decrease <a href="http://www.rnbm.org/point-dactualite-negociations-de-couperin-springer-insmi/">of the order of 10%</a>, arguing that this corresponds to articles that are already available because they were published in the Gold open access mode. The previous deal expired at the end of 2017, so negotiations are running late. As is common practice, subscribing institutions have formally asked for access to Springer journals to be cut off when the deal expired, so as not to be accused of tacitly agreeing to a contract extension. And as is common practice, Springer has declined to cut off access while negotiations are ongoing.<br /><br />In the case of Elsevier, a big deal that covers all of France for five years is set to expire at the end of 2018. Negotiations for a successor deal should begin in a few weeks. Given Elsevier’s usual practices, and Couperin’s resolve to save as much money as possible, a clash is likely. Now the current deal allows perpetual access to articles dated 2018 or earlier. 
So even if France enters 2019 with no new deal, only access to articles published in 2019 will be lost, with the rest remaining available via <a href="https://www.istex.fr/">ISTEX</a>.<br /><br />The resilience of French academic institutions in case of a clash with Elsevier would also depend on the attitude of the researchers. Attitudes can vary widely among disciplines and institutions. Researchers whose work has commercial applications, such as chemists and biologists, tend to be the least unhappy with the current practices of commercial publishers.<br /><br />Couperin still has to announce its strategy: does it want to progressively shrink its budget for subscriptions and support new publishers with the saved money, or to more aggressively force a transition to a Gold open access system like the DEAL consortium?<br /><br /><h4 id="the-bloggers-reaction">The blogger’s reaction</h4><br />I am impressed by the librarians’ technical work of tracking the details of journals’ prices and usage. However, I wonder how much this work affects the outcome of negotiations. The list prices of journals are largely arbitrary, and subscription prices have little to do with publishers’ costs. Predatory publishers may politely listen to librarians’ technical arguments, they may consent to face-saving symbolic rebates, but what prevents them from asking for as much money as they can squeeze out of subscribers?<br /><br />On the other hand, the librarians’ work could be useful in convincing researchers to take Couperin’s side in negotiations, which would mean not complaining when losing access to journals, and possibly even boycotting a recalcitrant publisher. 
But it looks hard to motivate researchers by just showing that prices are increasing well above inflation, for two reasons:<br /><ul><li>If the problem is framed in purely budgetary terms, the risk is that researchers view Couperin’s attitude as part of a squeeze in research budgets, and effectively take the side of publishers. After all, if money is saved on subscriptions, will it go to research?</li><li>The problem is not that prices increase by 5% each year when they should increase by 2%, but that legacy journals cost more than newer nonprofit journals by an order of magnitude. (Legacy journals effectively sell each article for about 5000 euros in Germany, whereas SciPost’s costs are about 300 euros per article.) Researchers are more likely to be mobilized by crude order-of-magnitude calculations than by fine analyses of price evolutions.</li></ul>The DEAL consortium’s approach is to publicly discuss not only money, but also the quality of research publishing, for example when they question <a href="http://researchpracticesandtools.blogspot.fr/2018/01/germany-wont-pay-for-natures-scientific.html#more">Nature’s editorial practices</a>. And DEAL’s negotiating positions explicitly take into account such qualitative issues, including of course open access.<br /><br />To summarize, my impression is that Couperin’s work is focussed too much on negotiating with publishers, and too little on communicating with researchers. Couperin would probably do well to publicly release more information, rather than assuming that publishers do not talk to one another. For a start, Couperin’s negotiating aims should be made public as early as possible. 
Researchers are unlikely to join a fight against a publisher if they do not know what they are actually fighting for.<br /><br />Sylvain Ribault<br /><br /><h3>Germany won't pay for Nature's "scientific porn", and other messages from Couperin's open science days (2018-01-27)</h3>Earlier this week, there was a mini-workshop in Paris called Couperin’s open science days 2018. (Original title in French: Journées sciences ouverte 2018.) I followed most of it via webcast, and I will now summarize some of the salient points. The videos are <a href="https://webcast.in2p3.fr/container/journees-science-ouverte-couperin-2018">available online</a>, but most of them are in French.<br /><br /><h4 id="the-german-way-horst-hippler-and-ralf-schimmer">The German way: <a href="https://webcast.in2p3.fr/video/changement-du-paradigme-publier-pour-etre-lu">Horst Hippler</a> and <a href="https://webcast.in2p3.fr/video/leveraging-bibliodiversity-transforming-the-journal-system-and-shifting-our-spending-from-subscription-to-open-access">Ralf Schimmer</a></h4><h4 id="the-german-way-horst-hippler-and-ralf-schimmer"> </h4>The most important messages came from Germany: the country whose academic institutions have thought seriously about scientific publishing, and have organized themselves so as to drive the needed reforms. The most salient manifestation so far has been the <a href="http://researchpracticesandtools.blogspot.fr/2017/02/germany-vs-elsevier-and-race-for-legal.html">standoff with Elsevier</a>, and it was nice to have further details on the strategy.<br /><a name='more'></a><br /><br />The basic idea is that nobody should pay for reading articles. The German DEAL consortium wants to pay for publishing instead, according to the Gold open access model. 
Current subscription prices value each published article at around 5000 euros, and the aim is to eventually pay of the order of 1000-2000 euros per article. The price should be the same for all journals. The consortium is negotiating with the publishers along these lines, with contracts that would last one year (rather than several years for the traditional big deals), and a transition of at most three years from the current system to the desired system. The negotiating tactic amounts to waiting for publishers to accept DEAL’s demands, without caring about subscriptions running out.<br /><br />Now comes the statement by Horst Hippler that gives its title to this post: Nature’s editorial process gives much power to in-house non-specialist editors, and is not scientifically sound. Nature is therefore not a scientific journal, and should be considered, according to an unnamed colleague, as "scientific pornography". Consequently, the consortium does not plan to include Nature in the deals that it is negotiating. Hippler’s calm demeanour and implacable logic may explain why no more people were knocked off their chairs when hearing that.<br /><br />Ralf Schimmer summarized the approach’s basic principle as an open access mandate for money, rather than for researchers: money spent on scientific publications should always come with an open access requirement. The discussion about Nature shows that there may be quality requirements as well.<br /><br />It was impressive that such radical ideas were formulated by officials who are involved in actually implementing them. What matters now is that other countries follow. The Dutch official <a href="https://webcast.in2p3.fr/video/disruptive-innovation-in-scholarly-communications">Koen Becking</a> said that the Netherlands’ VSNU plans to do just this in upcoming negotiations with Elsevier. 
However, the French and Spanish consortiums, while ready to rhetorically embrace the German ideas (and other fashionable ideas), do not plan to do anything similar, as was clear from their representatives’ interventions.<br /><br /><h4 id="french-hypocrisy">Couperin's powerlessness</h4><h4 id="french-hypocrisy"> </h4><a href="https://webcast.in2p3.fr/video/les-actions-de-promotion-de-l-open-science-au-niveau-europeen">Jean-Pierre Finance</a> heads Couperin, the French consortium that was organising the workshop. He was talking about "initiatives in favour of open science". Discussions, workshops, initiatives: Couperin is ready to do that much for open science, provided the usual business of big deals with legacy publishers is not disturbed.<br /><br />Jean-Pierre Finance was speaking right after <a href="https://webcast.in2p3.fr/video/ouverture-de-la-conference-journees-science-ouverte-couperin">Vanessa Proudman</a>, whose first order of business was to denounce the projected European directive on copyright as a threat to open science. Finance was then only slightly embarrassed to read a slide that praised the directive for going in the right direction! Finance positioned himself as a cheerleader for anything that claims to be about open science, as if Couperin had no power to do anything substantial about it. His conclusions are that many things are happening, and many actors are involved. (I am not caricaturing.)<br /><br />So Couperin apparently plans to remain a friendly-face tax collector for predatory publishers. When directly asked, after Hippler’s talk, what he thought of the German way, Finance only gave an evasive answer. Couperin’s powerlessness is probably congenital: the consortium’s mission is to sign deals, and no deal is not an option. 
But do not count on Finance to acknowledge this and propose remedies.<br /><br /><h4 id="other-noteworthy-soundbites">Other noteworthy soundbites</h4><h4 id="other-noteworthy-soundbites"> </h4><a href="https://webcast.in2p3.fr/video/open-access-and-beyond-scipost">Jean-Sébastien Caux</a> gave a dynamic and entertaining speech about his work on founding and operating <a href="http://researchpracticesandtools.blogspot.fr/2016/10/publishing-in-scipost-must.html">SciPost</a>. The insistence on open peer review (on top of open access) was most welcome. He explained that SciPost positions itself as a top-quality journal that largely emulates traditional journals (while being independent and much cheaper), although he is well aware that our current notion of an article is artificial and possibly doomed. Publishing an article costs SciPost about 300 euros. And the plan is to release the code behind SciPost in a few months.<br /><br /><a href="https://webcast.in2p3.fr/video/vers-une-methode-devaluation-rigoureuse-et-credible-de-la-recherche-et-des-chercheurs-la-piste-europeenne-skills-rewards">Bernard Rentier</a> started with a thorough demolition of the impact factor, concluding that anyone who uses it for evaluating researchers should be fired. He then embarked on proposing an admittedly complicated system of evaluation.<br /><br /><a href="https://webcast.in2p3.fr/video/acces-ouvert-en-mathematiques-un-panorama-et-un-exemple-demancipation">Benoît Kloeckner</a> said that paying the "centre Mersenne" for formatting articles costs 7 euros per page. 
Such data about costs should of course be compared to the 5000 euros per article that legacy publishers are charging.<br /><br />Sylvain Ribault<br /><br /><h3>Will no one rid me of these tiresome Latin plurals? (2018-01-21)</h3>The English language has inherited many scientific words from Latin: a <i>spectrum</i>, an <i>index</i>, a <i>torus</i>, a <i>formula</i>. Then which plural forms should we use: the Latin plurals two <i>spectra</i>, two <i>indices</i>, two <i>tori</i>, two <i>formulae</i>? Or the English plurals two <i>spectrums</i>, two <i>indexes</i>, two <i>toruses</i>, two <i>formulas</i>? The Latin and the English plurals of these words are both considered correct, but the Latin plurals are more widespread. I will nevertheless argue that using Latin plurals is impractical and illogical, and should often be avoided.<br /><br /><a name='more'></a><br />The main argument against Latin plurals is that irregular grammar makes a language more difficult. And the main argument in favour of irregular grammar is that it contributes to the beauty and richness of a language, and reflects its history. For a global technical idiom such as scientific English, this esthetic argument carries little weight. And the irregularities are particularly detrimental, because most users are not native English speakers, and have to learn the idiom as adults.<br /><br />Moreover, there is something basically illogical in using Latin plurals. The plural form is a type of declension, but in the Latin language there are also declensions for cases. If these <i>spectra</i> are interesting, should we study the features of these <i>spectrorum</i>? Declensions in English are much simpler, but there is still a possessive case. 
When talking about these <i>spectra’s</i> features, I am adding an English possessive ending to a Latin nominative plural. It would be more logical to only borrow nominative singulars from Latin, and to let them follow English grammar, including English plurals.<br /><br />Still, there are cases when Latin plurals are hard to renounce:<br /><ul><li>The English plural may be unwieldy. For example, <i>genuses</i> and <i>toruses</i> are ugly, but <i>genera</i> and <i>tori</i> sound better.</li><li>The English plural may coincide with a verb, and the Latin plural may lift the ambiguity. For example, if we take the plural of <i>index</i> to be <i>indexes</i> rather than <i>indices</i>, then the word <i>indexes</i> is both a noun and a verb. Of course, noun/verb ambiguities are a major bug of the English language, and Latin plurals can only help in a few cases.</li></ul>To conclude, here are examples of Latin plurals that should surely be eliminated:<br /><ul><li><i>minima</i> <span class="math inline">\(\to\)</span> <i>minimums</i></li><li><i>spectra</i> <span class="math inline">\(\to\)</span> <i>spectrums</i></li><li><i>formulae</i> <span class="math inline">\(\to\)</span> <i>formulas</i></li><li><i>tetrahedra</i> <span class="math inline">\(\to\)</span> <i>tetrahedrons</i></li><li><i>ansätze</i> <span class="math inline">\(\to\)</span> <i>ansatzes</i> (this one is German, not Latin)</li></ul>And examples of Latin plurals that we might want to spare:<br /><ul><li><i>indices</i> / <i>indexes</i>?</li><li><i>genera</i> / <i>genuses</i>?</li><li><i>tori</i> / <i>toruses</i>?</li><li><i>matrices </i>/<i> matrixes?</i> </li></ul><br />Sylvain Ribault<br /><br /><h3>On single-valued solutions of differential equations (2018-01-11)</h3>This post is about the issue of <a
href="https://mathoverflow.net/questions/285544/solve-this-nonlinear-matrix-equation">solving a nonlinear matrix equation</a> that I raised on MathOverflow. This matrix equation determines the existence of single-valued solutions of certain meromorphic differential equations. The motivating examples are the BPZ differential equations that appear in two-dimensional CFT. For more details on these examples, see my recent article with Santiago Migliaccio on <a href="https://arxiv.org/abs/1711.08916">the analytic bootstrap equations of non-diagonal two-dimensional CFT</a>.<br /><a name='more'></a><h4 id="relation-between-diagonal-and-non-diagonal-solutions"> </h4><h4 id="relation-between-diagonal-and-non-diagonal-solutions">Relation between diagonal and non-diagonal solutions</h4><i>(This paragraph is adapted from Appendix A of the cited article.)</i><br />Let <span class="math inline">\(D^+\)</span> and <span class="math inline">\(D^-\)</span> be two meromorphic differential operators of order <span class="math inline">\(n\)</span> on the Riemann sphere. Let a non-diagonal solution of <span class="math inline">\((D^+,D^-)\)</span> be a single-valued function <span class="math inline">\(f\)</span> such that <span class="math inline">\(D^+f = \bar{D}^- f=0\)</span>, where <span class="math inline">\(\bar D^-\)</span> is obtained from <span class="math inline">\(D^-\)</span> by <span class="math inline">\(z\to\bar z,\partial_z \to \partial_{\bar z}\)</span>. Let a diagonal solution of <span class="math inline">\(D^+\)</span> be a single-valued function <span class="math inline">\(f\)</span> such that <span class="math inline">\(D^+f =\bar D^+f=0\)</span>.<br />We assume that <span class="math inline">\(D^+\)</span> and <span class="math inline">\(D^-\)</span> have singularities at two points <span class="math inline">\(0\)</span> and <span class="math inline">\(1\)</span>. 
Let <span class="math inline">\((\mathcal F^\epsilon_i)\)</span> and <span class="math inline">\((\mathcal{G}^\epsilon_i)\)</span> be bases of solutions of <span class="math inline">\(D^\epsilon f=0\)</span> that diagonalize the monodromies around <span class="math inline">\(0\)</span> and <span class="math inline">\(1\)</span> respectively. In the case of <span class="math inline">\((\mathcal F^+_i)\)</span> this means <span class="math display">\[\begin{aligned} D^+ \mathcal{F}^+_i = 0 \quad , \quad \mathcal{F}^+_i \left(e^{2\pi i}z\right) = \lambda_i \mathcal{F}^+_i(z)\ .\end{aligned}\]</span> We further assume that our bases are such that <span class="math display">\[\begin{aligned} \forall \epsilon,\bar{\epsilon}\in\{+,-\}\, , \quad \left\{ \begin{array}{l} \mathcal{F}^\epsilon_i(z) \mathcal{F}^{\bar\epsilon}_j(\bar z) \ \text{has trivial monodromy around } z=0 \ \ \iff \ \ i=j\ , \\ \mathcal{G}^\epsilon_i(z) \mathcal{G}^{\bar\epsilon}_j(\bar z) \ \text{has trivial monodromy around } z=1 \ \ \iff \ \ i=j\ . \end{array}\right. \label{tmo}\end{aligned}\]</span> For <span class="math inline">\(\epsilon \neq \bar{\epsilon}\)</span> this is a rather strong assumption, which implies that the operators <span class="math inline">\(D^+\)</span> and <span class="math inline">\(D^-\)</span> are closely related to one another. This assumption implies that a non-diagonal solution <span class="math inline">\(f^0\)</span> has expressions of the form <span class="math display">\[\begin{aligned} f^0(z,\bar z) = \sum_{i=1}^n c^0_i \mathcal{F}_i^+(z) \mathcal{F}_i^-(\bar z) = \sum_{i=1}^n d^0_i \mathcal{G}^+_i(z) \mathcal{G}_i^-(\bar z)\ , \label{fz}\end{aligned}\]</span> for some structure constants <span class="math inline">\((c^0_i)\)</span> and <span class="math inline">\((d^0_i)\)</span>. 
Similarly, a diagonal solution <span class="math inline">\(f^\epsilon\)</span> of <span class="math inline">\(D^\epsilon\)</span> has expressions of the form <span class="math display">\[\begin{aligned} f^\epsilon(z,\bar z) = \sum_{i=1}^n c^\epsilon_i \mathcal{F}_i^\epsilon(z) \mathcal{F}_i^\epsilon(\bar z) = \sum_{i=1}^n d^\epsilon_i \mathcal{G}^\epsilon_i(z) \mathcal{G}_i^\epsilon(\bar z)\ . \label{fe}\end{aligned}\]</span> We now claim that<br /><blockquote>if <span class="math inline">\(D^+\)</span> and <span class="math inline">\(D^-\)</span> have diagonal solutions, and if moreover <span class="math inline">\((D^+,D^-)\)</span> has a non-diagonal solution, then the non-diagonal structure constants are geometric means of the diagonal structure constants, <span class="math display">\[\begin{aligned} (c^0_i)^2 \propto c^+_ic^-_i\ , \label{ccc} \end{aligned}\]</span> where <span class="math inline">\(\propto\)</span> means equality up to an <span class="math inline">\(i\)</span>-independent prefactor.</blockquote>The proof of this statement is simple bordering on the trivial. We introduce the size <span class="math inline">\(n\)</span> matrices <span class="math inline">\(M^\epsilon\)</span> such that <span class="math display">\[\begin{aligned} \mathcal{F}^\epsilon_i = \sum_{j=1}^n M^\epsilon_{i,j} \mathcal{G}^\epsilon_j \ .\end{aligned}\]</span> Inserting this change of bases in our expression for a diagonal solution, we must have <span class="math display">\[\begin{aligned} j\neq k \implies \sum_{i=1}^n c^\epsilon_i M_{i,j}^\epsilon M_{i,k}^\epsilon = 0\ .\end{aligned}\]</span> For a given <span class="math inline">\(\epsilon\)</span>, this is a system of <span class="math inline">\(\frac{n(n-1)}{2}\)</span> linear equations for <span class="math inline">\(n\)</span> unknowns <span class="math inline">\(c^\epsilon_i\)</span>. 
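To get a feeling for this system, one can build it for a random matrix and check that, generically, it admits no nonzero solution: for <span class="math inline">\(n=3\)</span> there are as many equations as unknowns, so the existence of a diagonal solution is a genuine constraint on <span class="math inline">\(M^\epsilon\)</span>. Here is a minimal numerical sketch (numpy, 0-based indices, arbitrary seed; not part of the derivation):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 3
M = rng.normal(size=(n, n))

# One row per equation: sum_i c_i M_{i,j} M_{i,k} = 0 for each pair j < k
A = np.array([M[:, j] * M[:, k] for j, k in combinations(range(n), 2)])
assert A.shape == (n * (n - 1) // 2, n)

# For n = 3 the homogeneous system is square, so a nonzero solution c
# requires det A = 0, a condition that a generic matrix M violates
assert abs(np.linalg.det(A)) > 1e-10
```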
One way to write the solution is <span class="math display">\[\begin{aligned} c^\epsilon_i \propto (-1)^i\det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^\epsilon_{i',1}M^\epsilon_{i',j} \right) = (-1)^i \left(\prod_{i'\neq i} M^\epsilon_{i',1}\right) \det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^\epsilon_{i',j} \right)\ .\end{aligned}\]</span> Similarly, inserting the change of bases in the expression of a non-diagonal solution, we find <span class="math display">\[\begin{aligned} j\neq k \implies \sum_{i=1}^n c^0_i M_{i,j}^+ M_{i,k}^- = 0\ .\end{aligned}\]</span> We will write two expressions for the solution of these linear equations, <span class="math display">\[\begin{aligned} c^0_i &\propto (-1)^i \det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^-_{i',1}M^+_{i',j}\right) = (-1)^i \left(\prod_{i'\neq i} M^-_{i',1}\right) \det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^+_{i',j} \right)\ , \\ &\propto (-1)^i \det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^+_{i',1}M^-_{i',j}\right) = (-1)^i \left(\prod_{i'\neq i} M^+_{i',1}\right) \det_{\substack{ i'\neq i \\ j \neq 1}} \left( M^-_{i',j} \right)\ .\end{aligned}\]</span> Writing <span class="math inline">\((c^0_i)^2\)</span> as the product of the above two expressions, we obtain the announced relation.<br /><br /><h4 id="existence-of-solutions">Existence of solutions</h4>We assume that a solution is single-valued if and only if it has trivial monodromies around <span class="math inline">\(0\)</span> and <span class="math inline">\(1\)</span>. Then the existence of diagonal and non-diagonal solutions depends on our matrices <span class="math inline">\(M^\epsilon\)</span>. 
Let us rewrite our solutions in terms of these matrices and their inverses: <span class="math display">\[\begin{aligned} c^\epsilon_i \propto \frac{ N^\epsilon_{i,1} }{ M^\epsilon_{i,1} }\quad , \quad c^0_i \propto \frac{ N^+_{i,1} }{ M^-_{i,1} } \propto \frac{N^-_{i,1}}{M^+_{i,1}}\ ,\end{aligned}\]</span> where we define <span class="math inline">\(N^\epsilon\)</span> as the transpose of the inverse of <span class="math inline">\(M^\epsilon\)</span>. This rewriting assumes that matrix elements of <span class="math inline">\(M^\epsilon\)</span> do not vanish: otherwise, we can have special solutions, which we will ignore.<br />Our expressions for the solutions depend on the choice of a particular second index, which we took to be <span class="math inline">\(1\)</span>. The condition for solutions to actually exist is that they do not depend on this choice. In the case of diagonal solutions, the condition is <span class="math display">\[\begin{aligned} \frac{N^\epsilon_{i_1,j_1}N^\epsilon_{i_2,j_2}}{N^\epsilon_{i_2,j_1}N^\epsilon_{i_1,j_2}} = \frac{M^\epsilon_{i_1,j_1}M^\epsilon_{i_2,j_2}}{M^\epsilon_{i_2,j_1}M^\epsilon_{i_1,j_2}}\ .\end{aligned}\]</span> In the case of non-diagonal solutions, the condition is <span class="math display">\[\begin{aligned} \frac{N_{i_1,j_1}^+}{N_{i_2,j_1}^+} \frac{M_{i_1,j_2}^+}{M_{i_2,j_2}^+} = \frac{N_{i_1,j_2}^-}{N_{i_2,j_2}^-} \frac{M_{i_1,j_1}^-}{M_{i_2,j_1}^-}\ .\end{aligned}\]</span> Summing over <span class="math inline">\(i_1\)</span> in this equation leads to <span class="math inline">\(\frac{\delta_{j_1,j_2}}{N^+_{i_2,j_1}M^+_{i_2,j_2}} = \frac{\delta_{j_1,j_2}}{N^-_{i_2,j_2}M^-_{i_2,j_1}}\)</span>. 
We call this the resummed condition, and write it as <span class="math display">\[\begin{aligned} \forall i,j, \ \ M^+_{i,j} \left((M^+)^{-1}\right)_{j,i} = M^-_{i,j} \left((M^-)^{-1}\right)_{j,i}\ .\end{aligned}\]</span> If we sum over <span class="math inline">\(i\)</span> or <span class="math inline">\(j\)</span>, we obtain identities that automatically hold. So we have <span class="math inline">\((n-1)^2\)</span> independent equations. This matches the number of compatibility conditions in our original system of <span class="math inline">\(n(n-1)\)</span> linear equations for the <span class="math inline">\(n\)</span> coefficients <span class="math inline">\((c^0_i)\)</span>.<br />Let us call two matrices <span class="math inline">\(M^1,M^2\)</span> equivalent if there are vectors <span class="math inline">\((\rho_i),(\sigma_j)\)</span> such that <span class="math inline">\(M^1_{i,j}=\rho_i M^2_{i,j}\sigma_j\)</span>. Then <span class="math inline">\(M_{i,j}(M^{-1})_{j,i}\)</span> is invariant under this equivalence. Modulo equivalence, the resummed condition has two simple universal solutions:<br /><ul><li><span class="math inline">\(M^+ = M^-\)</span>: then non-diagonal solutions exist if and only if diagonal solutions exist.</li><li><span class="math inline">\(M^+\)</span> is the transpose of the inverse of <span class="math inline">\(M^-\)</span>: then <span class="math inline">\(c_i^0\)</span> is an <span class="math inline">\(i\)</span>-independent constant, non-diagonal solutions always exist, but we do not know about diagonal solutions.</li></ul>Actually, for matrices of size two, these solutions are equivalent to each other, and give the general solution of our conditions. 
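These statements are easy to test numerically. Here is a sketch (numpy, random matrices, arbitrary seed and tolerances) checking the equivalence invariance of <span class="math inline">\(M_{i,j}(M^{-1})_{j,i}\)</span>, the inverse-transpose solution of the resummed condition, and the automatic identities from summing over <span class="math inline">\(i\)</span> or <span class="math inline">\(j\)</span>:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
M = rng.normal(size=(n, n))

def Q(A):
    # Q_{ij} = A_{ij} (A^{-1})_{ji}, the quantity appearing in the resummed condition
    return A * np.linalg.inv(A).T

# Invariance under the equivalence M_{ij} -> rho_i M_{ij} sigma_j
rho = rng.uniform(0.5, 2.0, size=n)
sigma = rng.uniform(0.5, 2.0, size=n)
assert np.allclose(Q(rho[:, None] * M * sigma[None, :]), Q(M))

# M^+ = transpose of the inverse of M^-: then Q(M^+) = Q(M^-), so the
# resummed condition holds
assert np.allclose(Q(np.linalg.inv(M).T), Q(M))

# Summing over i or j gives (M^{-1} M)_{jj} = 1 and (M M^{-1})_{ii} = 1,
# identities that hold automatically
assert np.allclose(Q(M).sum(axis=0), 1)
assert np.allclose(Q(M).sum(axis=1), 1)
```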
And diagonal solutions always exist.<br /><br /><h4 id="matrices-of-size-three">Matrices of size three</h4>By a direct calculation, the condition for a diagonal solution to exist is <span class="math display">\[\begin{aligned} \det \frac{1}{M} = 0 \ , \end{aligned}\]</span> where <span class="math inline">\(\frac{1}{M}\)</span> is the matrix whose coefficients are the inverses of those of <span class="math inline">\(M\)</span>.<br />We compute <span class="math display">\[\begin{aligned} M = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \implies M_{ij}M^{-1}_{ji} = \begin{pmatrix} t_1-t_2 & t_3-t_4 & t_5 - t_6 \\ t_5-t_4 & t_1-t_6 & t_3-t_2 \\ t_3-t_6 & t_5-t_2 & t_1-t_4 \end{pmatrix}\ ,\end{aligned}\]</span> where we introduced the combinations <span class="math display">\[\begin{aligned} t_1 = \frac{aei}{\det M} \ , \ t_2 = \frac{afh}{\det M} \ , \ t_3 = \frac{bfg}{\det M} \ , \ t_4 = \frac{bdi}{\det M} \ , \ t_5 = \frac{cdh}{\det M}\ , \ t_6 = \frac{ceg}{\det M}\ ,\end{aligned}\]</span> which obey the relations <span class="math display">\[\begin{aligned} t_1 +t_3 +t_5 - (t_2+t_4+t_6) = 1\quad , \quad t_1t_3t_5 = t_2t_4t_6\ .\end{aligned}\]</span> We also introduce the quadratic combination <span class="math display">\[\begin{aligned} \kappa = t_1t_3 +t_3t_5 + t_5t_1 - t_2t_4-t_4t_6 - t_2t_6 = t_1t_3t_5 \det \frac{1}{M}\det M\ , \end{aligned}\]</span> where <span class="math display">\[\begin{aligned} \det \frac{1}{M} \det M = \frac{1}{t_1}+\frac{1}{t_3}+\frac{1}{t_5} -\left( \frac{1}{t_2}+\frac{1}{t_4}+\frac{1}{t_6}\right) \ .\end{aligned}\]</span> If two matrices <span class="math inline">\(M^+\)</span> and <span class="math inline">\(M^-\)</span> have the same <span class="math inline">\(M_{ij}M^{-1}_{ji}\)</span>, then <span class="math inline">\(t_i^+ = t_i^- + c\)</span> for some <span class="math inline">\(c\)</span>, which implies in particular <span class="math display">\[\begin{aligned} c(\kappa^+ - \kappa^-) = 0\ 
.\end{aligned}\]</span> We assume that all the coefficients of <span class="math inline">\(M^+\)</span> and <span class="math inline">\(M^-\)</span> are nonzero, and distinguish two cases:<br /><ul><li>If <span class="math inline">\(c=0\)</span>, then <span class="math inline">\(t^+_i=t^-_i\)</span>, which implies that <span class="math inline">\(M^+\)</span> and <span class="math inline">\(M^-\)</span> are equivalent.</li><li>If <span class="math inline">\(c\neq 0\)</span>, then <span class="math inline">\(\det\frac{1}{M^+}=0\iff \det\frac{1}{M^-}=0\)</span>. In the special case where <span class="math inline">\(M^+\)</span> is equivalent to the inverse transpose of <span class="math inline">\(M^-\)</span>, we have <span class="math inline">\(c=\kappa^+=-\kappa^-\)</span>.</li></ul>In both cases, assuming non-diagonal solutions, we have diagonal solutions for one equation if and only if we have such solutions for the other equation.<br /><br /><h4 id="conclusion">Conclusion</h4>There is an intriguing piece of mathematics to be explored here, with a wealth of special cases where some coefficients of our matrices vanish. The relevance to two-dimensional CFT is however apparently not that great, because in order to solve CFTs with Virasoro symmetry, second-order BPZ equations are typically enough.Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-50232795247792667072017-12-07T15:01:00.000-08:002018-03-27T00:33:42.460-07:00After Elsevier, should we boycott Springer?While the ongoing <a href="http://thecostofknowledge.com/">“Cost of knowledge”</a> boycott of Elsevier may not be very effective, the likely “no deal” hard exit of Germany from Elsevier subscriptions renews the boycott’s relevance, and maybe its urgency. 
It is indeed likely that most German universities and research institutions will <a href="https://www.timeshighereducation.com/news/germany-edges-towards-brink-dispute-elsevier">lose access to Elsevier articles</a> in 2018.<br /><br />As a researcher, why would I continue publishing in journals that are in principle inaccessible to most of my German colleagues? Universal access to the literature via <a href="https://en.wikipedia.org/wiki/Sci-Hub">Sci-Hub</a> is under increasing <a href="https://www.eff.org/deeplinks/2017/11/another-court-overreaches-site-blocking-order-targeting-sci-hub">legal assault</a> and should not be taken for granted. In these circumstances, boycotting Elsevier is no longer only a matter of fighting an obnoxious publisher, but also a basic necessity of ensuring that articles are accessible to their intended audience. (Unless one thinks that the intended audience is not the scientific community, but the paying Elsevier subscribers.)<br /><br />Now it turns out that if I boycott Elsevier because of Germany, I may have to boycott Springer because of France. <br /><a name='more'></a>Like many colleagues, I have recently received an email from <a href="http://www.cnrs.fr/fr/organisme/organigramme/fiches/instit-inp.html">Alain Schuhl</a>, saying that the negotiations of the French national consortium Couperin with Springer are not going well, and that we should prepare for losing access to Springer journals in 2018.<br /><br />Given Couperin’s history of acceding to publishers’ diktats, and the rather late and confidential nature of the email from Alain Schuhl, it seems likely that Couperin will eventually capitulate, possibly after a token fight and a temporary suspension of subscriptions. I have no hint that Couperin has undertaken the kind of preparations for a no-deal scenario that made the German DEAL consortium’s firm negotiating stance possible. 
Moreover, this being France, there is always the possibility that the relevant ministry takes over the negotiations and imposes an unfavourable deal, as has happened in the past.<br /><br />Deal or no deal, the case for a Springer boycott is now getting stronger. But can we really afford to boycott both Elsevier and Springer? This probably depends on the field in which one works. In the case of physics, I have compiled a partial list of the main journals and journal families, by looking at recent publications from <a href="https://www.ipht.fr/en/">IPhT</a>, using <a href="https://hal-cea.archives-ouvertes.fr/search/index">HAL-CEA</a>. (Yes, HAL can sometimes be useful.) And here is the list:<br /><blockquote><b>Elsevier:</b> Nuclear Physics, Physics Letters, Physics Reports, Physica<br /><b>Springer:</b> Journal of High-Energy Physics, Journal of Statistical Physics, European Physical Journal, Communications in Mathematical Physics<br /><b>World Scientific:</b> International Journal of Modern Physics<br /><b>Institute of Physics:</b> Journal of Statistical Mechanics, Journal of Physics, Europhysics Letters<br /><b>American Physical Society:</b> Physical Review<br /><b>Independent:</b> SciPost Physics</blockquote>So, in physics, there are probably enough good alternatives to do without both Elsevier and Springer.<br /><br />Now, I must admit that my argument for boycotts gets weaker when one uses arXiv for communicating with colleagues, and publishes in journals for administrative purposes only. 
In this case, subscriptions become pointless, a fact that the consortiums that negotiate said subscriptions would do well to take into account.Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-53374056186702563282017-10-23T05:30:00.000-07:002017-10-23T05:30:17.556-07:00With weight-shifting operators, \(d\neq 2\) looks increasingly like \(d=2\) in CFTWhen working on conformal field theory, your life is very different depending on whether the dimension is two or not. In <span class="math inline">\(d=2\)</span> you have that infinite-dimensional symmetry algebra called the Virasoro algebra, and in some important cases such as minimal models you can classify your CFTs, and solve them analytically. In <span class="math inline">\(d\neq 2\)</span>, your symmetry algebra is finite-dimensional, and you mostly have to make do with numerical results. This not only makes you code a lot, but also incites you to make technical assumptions that are physically restrictive, such as unitarity.<br /><br /><h4 id="degenerate-fields-in-d2-cft">Degenerate fields in <span class="math inline">\(d=2\)</span> CFT</h4><h4 id="degenerate-fields-in-d2-cft"> </h4><br />What makes <span class="math inline">\(d=2\)</span> CFT solvable in many cases is the existence of degenerate primary fields. <br /><a name='more'></a>These fields were originally characterized in terms of the corresponding representations of the Virasoro algebra: a primary field is degenerate if the corresponding representation has a singular vector. A better characterization, which works not only for the Virasoro algebra but also for larger symmetry algebras such as W-algebras, is that a primary field is degenerate if its OPE with any other field yields only finitely many primary fields. In other words, it is better to characterize degenerate representations not in terms of their structure, but in terms of their fusion products. 
From this characterization, it follows that OPEs of degenerate fields yield degenerate fields. So, while there exist infinitely many degenerate fields, it is enough to study a few basic fields that generate all the others via their OPEs. In Virasoro-symmetric CFT, there are <span class="math inline">\(2\)</span> basic degenerate fields called <span class="math inline">\(V_{(2,1)}\)</span> and <span class="math inline">\(V_{(1,2)}\)</span>, and infinitely many degenerate fields called <span class="math inline">\(V_{(r,s)}\)</span> with <span class="math inline">\(r,s\in\mathbb{N}^*\)</span>.<br /><br />There are two types of results that can be derived using degenerate fields: universal results on conformal blocks, and results on correlation functions of specific models. These results are obtained by inserting a degenerate field in a block or correlation function of interest, see <a href="https://arxiv.org/abs/0902.1331">this article</a> for examples of results on conformal blocks, and <a href="https://arxiv.org/abs/1609.09523">this review</a> for computing correlation functions in Liouville theory and minimal models. Degenerate fields can be used irrespective of their appearance in the spectrum of a given model: in particular, they can be used for solving Liouville theory, whose spectrum does not include any degenerate field.<br /><br />OPEs of our basic degenerate fields with a primary field of momentum <span class="math inline">\(P\)</span> look like <span class="math display">\[V_{(2,1)} V_P \sim V_{P-\frac{b}{2}} + V_{P+\frac{b}{2}} \quad , \quad V_{(1,2)} V_P \sim V_{P-\frac{1}{2b}} + V_{P+\frac{1}{2b}}\]</span> where the momentum <span class="math inline">\(P\)</span> and parameter <span class="math inline">\(b\)</span> are functions of the conformal dimension and central charge respectively. Analytic bootstrap equations that are deduced from degenerate fields relate momentums by shifting them as in the above OPEs. 
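To see concretely why these two shifts are so powerful, one can check that two combinations of shifts <span class="math inline">\(m\frac{b}{2}+n\frac{1}{2b}\)</span> with different integer vectors <span class="math inline">\((m,n)\)</span> can only coincide when <span class="math inline">\(b^2\)</span> is rational. A small symbolic sketch (sympy; the truncation of the shift range is an arbitrary choice of mine):

```python
import sympy as sp

# Momentum shifts generated by the two degenerate OPEs: m*(b/2) + n*(1/(2b)).
# Two shift vectors (m, n) != (m', n') give the same total shift
# iff (m - m') * b^2 = n' - n, which requires b^2 to be rational.
def collisions(b2, size=4):
    vecs = [(m, n) for m in range(size) for n in range(size)]
    return [(v, w) for v in vecs for w in vecs
            if v < w and sp.simplify((v[0] - w[0]) * b2 - (w[1] - v[1])) == 0]

assert collisions(sp.Rational(2, 3)) != []  # rational b^2: shifts commensurable
assert collisions(sp.sqrt(2)) == []         # irrational b^2: all shifts distinct
```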
In particular, for <span class="math inline">\(b^2\in \mathbb{R}-\mathbb{Q}\)</span>, the shifts involve two incommensurable quantities <span class="math inline">\(\frac{b}{2}\)</span> and <span class="math inline">\(\frac{1}{2b}\)</span>, and this is why Liouville theory’s correlation functions can be completely determined.<br /><br /><h4 id="weight-shifting-operators-in-dneq-2-cft">Weight-shifting operators in <span class="math inline">\(d\neq 2\)</span> CFT</h4><br />In a <a href="https://arxiv.org/abs/1706.07813">recent article</a>, Karateev, Kravchuk and Simmons-Duffin have introduced weight-shifting operators in <span class="math inline">\(d\neq 2\)</span> CFT, whose similarity with <span class="math inline">\(d=2\)</span> degenerate fields is striking. These operators correspond to finite-dimensional representations of the conformal algebra, so an OPE of such an operator with any other field only involves finitely many primary fields. In any given dimension, there exists a finite set of basic weight-shifting operators that generate all the others: for example, in <span class="math inline">\(d=3\)</span>, this set is made of just one operator, which is associated to the spinor representation. And weight-shifting operators can be used in any CFT, whether or not they belong to the spectrum.<br /><br />The main motivation for introducing weight-shifting operators is to simplify calculations of conformal blocks, more specifically to reduce conformal blocks that involve spinning fields to simpler conformal blocks. Then a natural question is whether weight-shifting operators could also contribute to solving specific models. To answer this question, let us look at the OPE of a weight-shifting operator <span class="math inline">\(W_j\)</span> with a primary field <span class="math inline">\(V_\Delta\)</span> of conformal dimension <span class="math inline">\(\Delta\)</span>. According to eq. 
(2.12), the OPE shifts dimensions by half-integers, <span class="math display">\[W_j V_\Delta \sim \sum_{i=-j}^j V_{\Delta+i}\]</span> where <span class="math inline">\(j\in\frac12 \mathbb{N}\)</span> describes the properties of <span class="math inline">\(W_j\)</span> under conformal transformations, and <span class="math inline">\(i\)</span> runs by increments of one. It follows that analytic bootstrap equations deduced from weight-shifting operators would shift dimensions by integers. But in reasonably non-trivial CFTs, differences of field dimensions are not always integer. So there is little hope of fully solving CFTs by adapting the <span class="math inline">\(d=2\)</span> analytic bootstrap method. Adapting this method would only be useful in CFTs with series of fields whose dimensions differ by integers. Examples of such CFTs include <span class="math inline">\(d=2\)</span> Virasoro-symmetric CFTs, if we forget the Virasoro symmetry and only use the global conformal symmetry.<br /><br /><h4 id="conclusion">Conclusion</h4><br />Weight-shifting operators are promising tools for computing conformal blocks, and less promising for solving specific CFTs. If they indeed lead to much simpler computations than other methods, one may wonder why they were introduced only recently, given that their <span class="math inline">\(d=2\)</span> analogs have been widely used since the 1980s. It seems that the flow of ideas between the subfields of <span class="math inline">\(d=2\)</span> CFT and <span class="math inline">\(d\neq 2\)</span> CFT has not been optimal. 
It is probably the practitioners of <span class="math inline">\(d=2\)</span> CFT who are to be blamed for not explaining their techniques well enough, and not working out the generalization to <span class="math inline">\(d\neq 2\)</span> themselves.Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-21838722898245866492017-09-09T12:36:00.000-07:002017-09-29T11:14:44.063-07:00Self-publishing a book with GlasstreeThree years and three major revisions after it first appeared on Arxiv and GitHub (why GitHub? see <a href="http://researchpracticesandtools.blogspot.fr/2014/02/the-case-for-emancipating-articles-from.html">this blog post</a>), my <a href="https://arxiv.org/abs/1406.4290">review article</a> on two-dimensional conformal field theory may be mature enough for appearing in book form. But with which publisher?<br />To answer this question, I should first say why I would want to have a book in the first place, since the text is already on Arxiv. <br /><a name='more'></a>My main motivation is simply to have it in a nicely bound form. At 127 A4 pages, the text is indeed too long to be cleanly stapled. Other motivations for publishing books, that play little or no role in my case, are:<br /><ul><li>To make money. But should publicly paid researchers make money from their professional writings?</li><li>To get constructive feedback from publisher-mandated reviewers. This could alternatively be obtained by publishing the text as a review article. But after three years on Arxiv I have already received a fair amount of spontaneous feedback.</li><li>To get the text formatted nicely. But nowadays publishers tend to do less and less work on this front, and to ask more and more work from the authors. And this tedious work is arguably of little use. 
Moreover, when publishers do perform some work, they often introduce errors.</li><li>To get the text advertised by the publisher, and bought by libraries. But is this still really relevant nowadays?</li></ul>Therefore, what I need is more a printer than a publisher: someone who interferes neither with my text and formatting, nor with my copyright arrangements (public domain in my case), and who prints a good quality book as cheaply as possible.<br /><br /><b>Glasstree online academic self-publishing</b> is the name of the publisher I found after some research and an unfortunate trial with the Éditions Universitaires Européennes. The main advantage is the price of 6.95 dollars for my 127 colored A4 pages. (This would have been even cheaper in black and white.) At such a price, shipping costs not much less than the book itself. I had the option of adding royalties to this price, which I did not do.<br />So the book can now be <a href="https://glasstree.com/shop/catalog/conformal-field-theory-on-the-plane_819/">bought on Glasstree</a> for 6.95 dollars. The quality of the copy that I have received is good: it is thinner than a printout on standard paper, it does not look like it will fall apart any time soon, and the colors are beautiful.<br />Glasstree’s website is OK but some improvements are possible: I would have liked to be able to add comments, including a link to Arxiv. A Paypal account is needed even if I do not include royalties in my price. 
And it is not clear what happens if and when I want to replace the book with an improved version, in particular I am not sure that the book’s web address on Glasstree will stay the same.<br />To conclude, it is much easier to publish with Glasstree than with traditional book publishers, and the prices of the resulting books can be much, much lower.Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-57347569070316873792017-09-08T05:43:00.000-07:002017-12-31T01:50:57.067-08:00Differential equations from fusion rules in 2d CFTIn two-dimensional conformal field theory, correlation functions are partly (and sometimes completely) determined by the properties of the fields under symmetry transformations. In particular, correlation functions of primary fields are relatively simple, because by definition primary fields are killed by the annihilation modes of the symmetry algebra. On top of that, there exist degenerate primary fields that are killed not only by the annihilation modes, but also by some combinations of creation modes. As a result, correlation functions that involve degenerate primary fields sometimes obey nontrivial differential equations, for example BPZ equations. Usually, these equations are deduced from the relevant combinations of creation modes, called null vectors.<br />Determining null vectors in representations of a symmetry algebra is often complicated, as the algebraic structures of the relevant algebras and of their representations can themselves be complicated. Even in the case of the Virasoro algebra, it is not easy to explicitly determine null vectors. It is however much easier to determine which representations do have null vectors, using the fusion product. 
For example, if we know degenerate representations <span class="math inline">\(R_{(1,1)}\)</span> and <span class="math inline">\(R_{(2,1)}\)</span> with null vectors at levels <span class="math inline">\(1\)</span> and <span class="math inline">\(2\)</span> respectively, we can deduce that the fusion product <span class="math inline">\(R_{(2,1)}\times R_{(2,1)}\)</span> is degenerate and contains <span class="math inline">\(R_{(1,1)}\)</span>. The remainder of <span class="math inline">\(R_{(2,1)}\times R_{(2,1)}\)</span> must therefore be a degenerate representation, which can be identified as <span class="math inline">\(R_{(3,1)}\)</span>, and has a null vector at level <span class="math inline">\(3\)</span>. (See Section 2.3.1 of my <a href="https://arxiv.org/abs/1406.4290">review article</a> for more details.)<br />An important idea is therefore that it is not the structures of the algebras and representations that matter, but rather the structure of the category of representations, in other words their fusion products. This idea has in particular been developed in the works of <a href="https://arxiv.org/abs/math/0602079">Fuchs, Runkel and Schweigert</a>. But how does this help us compute correlation functions, and determine the differential equations that they obey? In other words, can we determine differential equations from fusion products, without computing null vectors?<br /><a name='more'></a><br /><br /><h4 id="fusion-rules-and-characteristic-exponents">Fusion rules and characteristic exponents</h4>A proposed answer can be found in a recent <a href="https://arxiv.org/abs/1708.06772">article by Mukhi and Muralidhara</a>. The basic idea is that fusion rules determine characteristic exponents of differential equations, and that this is sometimes enough for deducing the equations themselves. 
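The degenerate fusion rules themselves take only a few lines of code. Here is a minimal sketch of the standard rule for Virasoro degenerate representations <span class="math inline">\(R_{(r,s)}\)</span> (the function name is mine):

```python
# Fusion of degenerate Virasoro representations R_{(r1,s1)} x R_{(r2,s2)}:
# the first index runs over r = |r1-r2|+1, |r1-r2|+3, ..., r1+r2-1,
# and similarly for the second index.
def fuse(rep1, rep2):
    (r1, s1), (r2, s2) = rep1, rep2
    return [(r, s)
            for r in range(abs(r1 - r2) + 1, r1 + r2, 2)
            for s in range(abs(s1 - s2) + 1, s1 + s2, 2)]

# R_{(2,1)} x R_{(2,1)} = R_{(1,1)} + R_{(3,1)}
assert fuse((2, 1), (2, 1)) == [(1, 1), (3, 1)]
# R_{(n,1)} x R_{(n,1)} = R_{(1,1)} + R_{(3,1)} + ... + R_{(2n-1,1)}
assert fuse((4, 1), (4, 1)) == [(1, 1), (3, 1), (5, 1), (7, 1)]
```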
For example, the fusion rule <span class="math display">\[R_{(2,1)}\times R_{(2,1)} = R_{(1,1)} + R_{(3,1)}\]</span> implies that any correlation function that involves two copies of the corresponding primary field <span class="math inline">\(V_{(2,1)}\)</span>, behaves as <span class="math display">\[\Big< V_{(2,1)}(z_1)V_{(2,1)}(z_2) \cdots \Big> \underset{z_1\to z_2}{=} a (z_1-z_2)^{h_{(1,1)}-2h_{(2,1)}}\Big[1+O(z_1-z_2)\Big] + a' (z_1-z_2)^{h_{(3,1)}-2h_{(2,1)}}\Big[1+O(z_1-z_2)\Big]\]</span> for some coefficients <span class="math inline">\(a,a'\)</span>, where the conformal dimension associated to the representation <span class="math inline">\(R_{(r,s)}\)</span> is given in terms of the central charge <span class="math inline">\(c\)</span> of the Virasoro algebra by <span class="math display">\[h_{(r,s)} = \frac14\Big((b+b^{-1})^2 - (br + b^{-1}s)^2\Big)\ ,\quad \text{where} \quad c=1+6(b+b^{-1})^2\]</span> Actually, such a correlation function obeys a second-order differential equation in <span class="math inline">\(z_1\)</span>, whose characteristic exponents at <span class="math inline">\(z_1=z_2\)</span> are <span class="math inline">\( h_{(1,1)}-2h_{(2,1)}\)</span> and <span class="math inline">\( h_{(3,1)}-2h_{(2,1)} \)</span>.<br />So, consider a four-point function of degenerate primary fields <span class="math inline">\(\Big< V_1(z_1)V_2(z_2)V_3(z_3)V_4(z_4) \Big>\)</span>. Assume that fusion rules allow <span class="math inline">\(n\)</span> primary fields to appear in products of any two of these fields, so that our four-point function obeys a differential equation of order <span class="math inline">\(n\)</span>. We want to determine this equation from the knowledge of the characteristic exponents <span class="math inline">\(\lambda^{(z)}_i\)</span> for <span class="math inline">\(i\in \{1,\dots, n\}\)</span> and <span class="math inline">\(z\in\{z_2,z_3,z_4\}\)</span>. 
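As an aside, the dimension formula quoted above is easy to handle symbolically, which is convenient since specific values of <span class="math inline">\(h_{(r,s)}\)</span> recur in the examples of this post. A sympy sketch:

```python
import sympy as sp

b = sp.symbols('b', positive=True)

def h(r, s):
    # h_{(r,s)} = ((b + 1/b)^2 - (b*r + s/b)^2) / 4
    return sp.expand(((b + 1 / b) ** 2 - (b * r + s / b) ** 2) / 4)

assert h(1, 1) == 0
assert sp.simplify(h(2, 1) + sp.Rational(1, 2) + 3 * b**2 / 4) == 0  # h_{(2,1)}
assert sp.simplify(h(3, 1) + 1 + 2 * b**2) == 0                      # h_{(3,1)}
```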
Writing this equation as <span class="math display">\[\sum_{k=0}^n (-1)^{n-k} W_k \partial_{z_1}^k f = 0\]</span> the coefficients <span class="math inline">\(W_k\)</span> are Wronskians of a basis of <span class="math inline">\(n\)</span> solutions; see eq. (2.3) of <a href="https://arxiv.org/abs/1708.06772">the article</a>. (Solutions are called conformal blocks.) Ratios of Wronskians <span class="math inline">\(\frac{W_k}{W_l}\)</span>, and their logarithmic derivatives <span class="math inline">\(\frac{\partial_{z_1}W_k}{W_k}\)</span>, have trivial monodromies around the singularities <span class="math inline">\(z_2,z_3,z_4\)</span>, and are therefore meromorphic functions of <span class="math inline">\(z_1\)</span>, with poles at <span class="math inline">\(z_2,z_3,z_4\)</span>. It is possible to compute their residues at these poles from the characteristic exponents, and in some cases to deduce the differential equation. (See the article for the details.)<br />As a simple test of these ideas, let us focus on relations between characteristic exponents. The function <span class="math inline">\(\frac{\partial_{z_1}W_n}{W_n}\)</span> is meromorphic, with simple poles at <span class="math inline">\(z_2,z_3,z_4\)</span>, whose residues are <span class="math display">\[r(z_j) = -\frac{n(n-1)}{2} + \sum_{i=1}^n \lambda^{(z_j)}_i\]</span> Since the primary field <span class="math inline">\(V_1(z_1)\)</span> with dimension <span class="math inline">\(h_1\)</span> behaves at infinity as <span class="math inline">\(V_1(z_1)\underset{z_1\to \infty}{=} O(z_1^{-2h_1})\)</span>, we also have a pole at infinity with the residue <span class="math display">\[r(\infty) = -n(n-1) -2nh_1\]</span> (The term <span class="math inline">\(-\frac{n(n-1)}{2}\)</span> comes from the <span class="math inline">\(\frac{n(n-1)}{2}\)</span> derivatives in the Wronskian <span class="math inline">\(W_n\)</span>. 
The residue at infinity involves twice this number, because at infinity all the solutions have the same characteristic exponent, so infinity should be considered a singularity with exponents <span class="math inline">\(-2h_1,-2h_1-1,\dots,-2h_1-(n-1)\)</span>.) The sum of the residues, where the residue at infinity counts with a minus sign, must vanish, <span class="math display">\[S = -r(\infty)+ \sum_{j=2}^4 r(z_j) =0\]</span> The resulting relation between characteristic exponents is <span class="math display">\[S= -\frac{n(n-1)}{2} + 2nh_1 + \sum_{i=1}^n\sum_{j=2}^4 \lambda^{(z_j)}_i =0\]</span><br /><h4 id="simple-examples"> </h4><h4 id="simple-examples">Simple examples</h4>The four-point function <span class="math inline">\(\Big<V_{(2,1)}(z_1)V_{(2,1)}(z_2)V_{(2,1)}(z_3)V_{(2,1)}(z_4)\Big>\)</span> obeys a BPZ equation of order <span class="math inline">\(n=2\)</span>. The characteristic exponents are the same at all singularities, <span class="math display">\[\lambda_1^{(z_j)} = h_{(1,1)}-2h_{(2,1)} \quad , \quad \lambda_2^{(z_j)} = h_{(3,1)}-2h_{(2,1)}\]</span> We therefore find <span class="math display">\[S = -1 + 3h_{(1,1)} -8 h_{(2,1)} + 3h_{(3,1)}\]</span> Since <span class="math inline">\(h_{(1,1)}=0, h_{(2,1)}=-\frac12 -\frac34 b^2\)</span> and <span class="math inline">\(h_{(3,1)}=-1-2b^2\)</span>, this vanishes as expected.<br />More generally, the four-point function <span class="math inline">\(\Big<V_{(n,1)}(z_1)V_{(n,1)}(z_2)V_{(n,1)}(z_3)V_{(n,1)}(z_4)\Big>\)</span> obeys an equation of order <span class="math inline">\(n\)</span>, whose characteristic exponents follow from the fusion rule <span class="math display">\[V_{(n,1)}\times V_{(n,1)} = V_{(1,1)} + V_{(3,1)} + \cdots + V_{(2n-1,1)}\]</span> In this case we have <span class="math display">\[S = -\frac{n(n-1)}{2} -4n h_{(n,1)} +3\sum_{i=1}^n h_{(2i-1,1)}\]</span> We compute <span class="math inline">\(h_{(n,1)} = -\frac12(n-1) -\frac14(n^2-1)b^2\)</span> and <span class="math 
inline">\(\sum_{i=1}^n h_{(2i-1,1)} = -\frac12 n(n-1) -\frac13 n(n^2-1)b^2\)</span>, and find <span class="math inline">\(S=0\)</span>.<br />Let us discuss an example based on the algebra <span class="math inline">\(W_3\)</span>. (See <a href="https://arxiv.org/abs/1007.1293">this article</a> for some background.) We consider degenerate representations of <span class="math inline">\(W_3\)</span> that correspond to finite-dimensional representations of the Lie algebra <span class="math inline">\(s\ell_3\)</span>, and have the fusion rules <span class="math display">\[3\times \bar 3 = 1 + 8 \quad , \quad 3\times 3 = \bar 3 + 6\]</span> Their conformal dimensions are <span class="math display">\[h_1 = 0 \quad , \quad h_3 = h_{\bar 3} = -1-\frac43 b^2 \quad , \quad h_6 = -2 - \frac{10}{3}b^2 \quad , \quad h_8 = -2-3b^2\]</span> The four-point function <span class="math inline">\(\Big< V_3(z_1)V_{\bar 3}(z_2) V_3(z_3) V_{\bar 3}(z_4)\Big>\)</span> obeys a differential equation of order <span class="math inline">\(n=2\)</span>, whose characteristic exponents obey <span class="math inline">\(\lambda_i^{(z_2)}=\lambda_i^{(z_4)}\neq \lambda_i^{(z_3)}\)</span>. We find <span class="math display">\[S = -1 - 8h_3 + 2h_1 + 2h_8 + h_{\bar 3} + h_6 = 0\]</span><br /><span class="math display"> </span> <br /><h4 id="the-problem-with-multiple-components">The problem with multiple components</h4>Consider a theory with an affine <span class="math inline">\(s\ell_2\)</span> Lie algebra at a level <span class="math inline">\(k\in\mathbb{C}\)</span>. 
An affine primary field <span class="math inline">\(\Phi^j\)</span> with spin <span class="math inline">\(j\)</span> has the dimension <span class="math display">\[h_j = \frac{j(j+1)}{k+2}\]</span> For <span class="math inline">\(j\in\frac12 \mathbb{N}\)</span> this field is degenerate, and the fusion rule of the corresponding representation is <span class="math display">\[R_j \times R_j = \sum_{i=0}^{2j} R_i\]</span> Therefore, the four-point function <span class="math inline">\(\Big< \Phi^j(z_1) \Phi^j(z_2)\Phi^j(z_3)\Phi^j(z_4)\Big>\)</span> obeys a differential equation of order <span class="math inline">\(n=2j+1\)</span>. If we naively computed characteristic exponents from conformal dimensions of primary fields as we did before, then the combination that we would expect to vanish would be <span class="math display">\[S_0 = - j(2j+1) -4(2j+1)h_j + 3\sum_{i=0}^{2j} h_i\]</span> Using <span class="math inline">\(\sum_{i=0}^{2j} h_i = \frac{4}{3(k+2)}j(j+1)(2j+1)\)</span>, we however find <span class="math display">\[S_0 = -j(2j+1) \neq 0\]</span> In the <a href="https://arxiv.org/abs/1708.06772">article by Mukhi and Muralidhara</a>, such discrepancies are attributed to primary fields having multiple components, in other words to irreducible representations containing several primary fields. And indeed, there are actually <span class="math inline">\(2j+1\)</span> primary fields of spin <span class="math inline">\(j\)</span>, labelled by a conserved number <span class="math inline">\(m=-j,-j+1,\dots,j\)</span> and denoted <span class="math inline">\(\Phi^j_{m}\)</span>. In the operator product expansion <span class="math inline">\(\Phi^\frac12_{\frac12}\Phi^\frac12_{\frac12}\)</span>, we can have only fields with <span class="math inline">\(m=1\)</span>. 
So the primary field <span class="math inline">\(\Phi^0_0\)</span> cannot appear, and we expect the representation <span class="math inline">\(R_0\)</span> to be represented by a level-one affine descendant field instead. The corresponding exponent is therefore <span class="math inline">\(h_0 -2h_\frac12 + 1\)</span> instead of <span class="math inline">\(h_0-2h_\frac12\)</span>. In the case of the four-point function <span class="math inline">\(\Big< \Phi^\frac12_\frac12(z_1) \Phi^\frac12_{-\frac12}(z_2)\Phi^\frac12_\frac12(z_3)\Phi^\frac12_{-\frac12}(z_4)\Big>\)</span>, we therefore do find <span class="math inline">\(S=0\)</span> rather than the naive result <span class="math inline">\(S_0=-1\)</span>, which was computed with an incorrect exponent.<br />It would be nice to have a more conceptual understanding of the discrepancy, and a prediction of <span class="math inline">\(S\)</span> from features of the four representations, without having to invoke specific states in these representations.<br /><br /><h4 id="conclusion">Conclusion</h4>The Wronskian method is a promising approach for deriving differential equations for four-point conformal blocks and correlation functions. It appears technically simpler than the direct method of first determining null vectors. As a bonus, the Wronskian method yields an ordinary differential equation, whereas the direct method yields a partial differential equation, which then has to be reduced to an ordinary differential equation using global conformal invariance. <br /><h4 id="multiply-degenerate-fields-and-unphysical-singularities"> </h4><h4 id="multiply-degenerate-fields-and-unphysical-singularities">Multiply degenerate fields, and unphysical singularities (added on September 12th)</h4>Let us return to the case of the Virasoro symmetry algebra. For rational values of the central charge, fields can have multiple null vectors.<br />For example, let us assume <span class="math inline">\(h_{(3,1)}=h_{(1,2)}\)</span>. 
This occurs in the Ising model (<span class="math inline">\(c=\frac12\)</span>) and also for <span class="math inline">\(c=28\)</span>. The corresponding doubly degenerate field <span class="math inline">\(V=V_{(3,1)}=V_{(1,2)}\)</span> should obey <span class="math display">\[V \times V = V_{(1,1)} + V_{(3,1)} + V_{(5,1)} = V_{(1,1)}+V_{(1,3)}\]</span> If <span class="math inline">\(c=28\)</span>, then <span class="math inline">\(h_{(5,1)}=h_{(1,3)}\)</span>. We conclude that <span class="math inline">\(V\times V\)</span> has two terms, with the coefficient of the term <span class="math inline">\(V_{(3,1)}\)</span> being zero. The relation between the characteristic exponents of the four-point function <span class="math inline">\(\Big<V_{(1,2)}(z_1)V_{(1,2)}(z_2)V_{(1,2)}(z_3)V_{(1,2)}(z_4)\Big>\)</span> that holds for generic <span class="math inline">\(c\)</span> also holds for <span class="math inline">\(c=28\)</span> by continuity. The relation for <span class="math inline">\(\Big<V_{(3,1)}(z_1)V_{(3,1)}(z_2)V_{(3,1)}(z_3)V_{(3,1)}(z_4)\Big>\)</span> also holds for <span class="math inline">\(c=28\)</span>, and these two relations are mutually compatible. (Notice that they do not involve the same order <span class="math inline">\(n\)</span> of the differential equation.)<br />In the case of the Ising model, the only term that is allowed by both null vectors is <span class="math display">\[V\times V = V_{(1,1)}\]</span> The characteristic exponents of the first-order differential equation for <span class="math inline">\(\Big<V (z_1)V(z_2)V(z_3)V(z_4)\Big>\)</span> are <span class="math inline">\(\lambda_1^{(z_j)} =\lambda_1^{(\infty)} = -1\)</span>, and their combination is <span class="math display">\[S = -2\]</span> The reason why <span class="math inline">\(S\neq 0\)</span> is now that the four-point function does not have singularities at <span class="math inline">\(z_1\in\{z_2,z_3,z_4,\infty\}\)</span> only: it also has two zeros. 
Its expression is indeed <span class="math display">\[\Big<V (z_1)V(z_2)V(z_3)V(z_4)\Big> \propto \frac{1}{(z_1-z_2)(z_3-z_4)} + \frac{1}{(z_1-z_3)(z_4-z_2)} + \frac{1}{(z_1-z_4)(z_2-z_3)}\]</span> This suggests that the singularities of the conformal blocks are not always limited to <span class="math inline">\(z_1\in\{z_2,z_3,z_4,\infty\}\)</span>. If there are extra singularities, the combination <span class="math inline">\(S\)</span> of the characteristic exponents at <span class="math inline">\(z_1\in\{z_2,z_3,z_4,\infty\}\)</span> does not have to vanish. (An extra singularity whose characteristic exponents are not all integer was found in the <a href="https://arxiv.org/abs/1109.6764">large <span class="math inline">\(c\)</span> limit of certain W-algebra conformal blocks</a>.)<br /><br /><b>Earlier reference (added on September 21st)</b><br /><br />The idea of deriving differential equations from fusion rules is also explained in Section 14 of <a href="https://arxiv.org/abs/hep-th/9702194">these 1997 lecture notes</a> by Jürgen Fuchs.<br /><br /><b>Later reference (added on December 31st)</b><br /><br />A more systematic treatment of the derivation of differential equations from fusion rules, based on the Katz theory of Fuchsian rigid systems, has recently appeared in <a href="https://arxiv.org/abs/1711.04361">this article by Belavin, Haraoka and Santachiara</a>. This article mainly deals with theories with W-algebra symmetries.<br /><br />The theory of Fuchsian rigid systems is powerful, but it does not know about fusion. In a CFT with Virasoro symmetry, \(R_{(3,1)}\) can be deduced from \(R_{(2,1)}\) by fusion. But the Fuchsian system for a four-point function with \(V_{(2,1)}\) is rigid, while the system for a four-point function with \(V_{(3,1)}\) is not. 
It would be nice to have some extra constraints that would allow us to determine the differential equation in the latter case.Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-83286383065660119682017-02-21T14:02:00.000-08:002017-02-22T00:30:01.003-08:00Germany vs Elsevier: a puzzling maneuverIn the tense <a href="http://researchpracticesandtools.blogspot.fr/2017/02/germany-vs-elsevier-and-race-for-legal.html">negotiations between the German consortium DEAL and Elsevier</a>, there is a new twist: on February 13th, Elsevier <a href="https://www.timeshighereducation.com/news/elsevier-restores-journal-access-german-researchers">announced</a> that it was restoring the access of the affected German institutions to its journals.<br /><br />Elsevier’s two explanations for this maneuver fall short of being convincing. The first explanation, <a href="http://www.nature.com/news/german-scientists-regain-access-to-elsevier-journals-1.21482">given to Nature</a>, is that “it is customary [...] to retain access to content after a contracted period is concluded and as long as renewal discussions are ongoing”. Why then cut off access in January, and restore it in February? <br /><a name='more'></a>The second explanation, from Elsevier’s own announcement, is that Elsevier “supports German research and expects that an agreement can be reached”. But Elsevier’s usual way of supporting research is locking articles behind paywalls, and siphoning as much money as possible from research. 
And expecting an agreement to be reached soon is optimistic, given that <a href="http://www.sciencemag.org/news/2017/02/elsevier-journals-are-back-online-60-german-institutions-had-lost-access">negotiations are not scheduled to resume before March 23rd</a>.<br /><br />By unilaterally restoring access, Elsevier is relieving the pressure on DEAL to reach an agreement, and apparently ruining its traditional extortionate negotiating strategy. Such an admission of weakness also makes it less likely that the extortionate strategy will succeed with other consortiums in the future. The maneuver however makes sense, because DEAL has not been made more accommodating by losing access. Elsevier must be afraid that academics will migrate to other means of procuring articles, and realize that they fare well without a subscription.<br /><br />Still, how does the maneuver fit into Elsevier’s strategy? I have argued that the (mostly) Sci-Hub-induced threat of the demise of subscriptions leaves the publishing industry with two options:<br /><ol><li>performing a revenue-neutral switch from subscriptions to gold open access,</li><li>trying to block access to Sci-Hub.</li></ol>Restoring access would obviously be consistent with option #1. This would be the signal that Elsevier may be renouncing making people pay to read, in favour of making them pay to publish. In this case, Elsevier would be ready to accept DEAL’s demand that future articles from the consortium’s members be made openly accessible. This would however be dangerous for Elsevier: while journals hardly compete with each other for readers (who can seldom substitute an article for a different one), journals do compete for authors, and gold open access journals in particular compete on price. 
But Elsevier would <a href="https://svpow.com/2012/07/09/what-does-it-cost-to-publish-a-paper-with-elsevier/">need to charge 5-10 thousand dollars per article</a> for the switch to gold open access to be revenue-neutral, and would not be competitive with PLOS, let alone PeerJ. The challenge would then be to lock institutions into deals that would make such costs invisible to authors. For a small-scale example of such a deal, see the <a href="http://www.nature.com/news/science-journals-permit-open-access-publishing-for-gates-foundation-scholars-1.21486">recent agreement between the Gates Foundation and the publisher of Science</a>.<br /><br />Nevertheless, it is still possible that Elsevier is clinging to the subscription model, and pursuing option #2. Elsevier would then be preparing a politico-legal fight to have Sci-Hub, and other unofficial ways to get articles, blocked in Germany. In such a fight, Elsevier would not want to appear as an extortionist, or be accused of abusing its dominant position. This could explain the maneuver of restoring access, as a gesture not towards DEAL or academics, but towards the authorities that would decide the outcome of this fight. (A more brutal variant of option #2 would be to have higher authorities take over the negotiations and impose a deal on Elsevier's terms, as happened with the latest agreement between Elsevier and the French consortium Couperin.)<br /><br /><br />Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-50002648139583388022017-02-01T12:05:00.000-08:002017-02-01T12:05:51.261-08:00Germany vs Elsevier, and the race for legal open accessThe debate about green versus gold open access leaves aside a more fundamental difference: that between legal open access and pirate open access. 
This difference is essential because, <a href="http://bjoern.brembs.net/2016/02/sci-hub-as-necessary-effective-civil-disobedience/">as Bjorn Brembs put it</a>,<br /><blockquote>In terms of making the knowledge of the world available to the people who are the rightful owners, [pirate] Alexandra Elbakyan has single-handedly been more successful than all [legal] open access advocates and activists over the last 20 years combined.</blockquote>With Sci-Hub, pirate open access is so successful that one might wonder whether legal open access is still needed. The obvious argument that pirate open access is parasitic and therefore unsustainable, because someone has to pay for scientific journals, is easily disposed of: with up-to-date tools, journals could <a href="https://gowers.wordpress.com/2015/09/10/discrete-analysis-an-arxiv-overlay-journal/">cost orders of magnitude less than they currently do</a>, and be financed by modest institutional subsidies. A better reason why pirate open access is not enough is that it is subject to technical and legal challenges. This makes it potentially precarious, and <a href="https://blogs.ch.cam.ac.uk/pmr/2016/05/06/sci-hub-and-legal-aspects-of-contentmining/">unsuited to uses such as content mining</a>.<br /><a name='more'></a><br /><br />Universal open access would in principle lead to the elimination of subscriptions to journals, and the progress of open access could therefore be measured in terms of reductions in subscription costs. From this angle, open access has made little or no progress so far. However, pirate open access may make a crucial contribution to achieving legal open access, by making subscriptions to journals less vital to academics. 
Subscription costs could then decrease, and journals could lose subscribers, possibly to the point where the subscription model becomes untenable for publishers.<br /><br />This assumes that academic institutions take advantage of the existence of Sci-Hub in their negotiations with publishers. But academic institutions are used to accepting endless price increases, rather than risking losing access to journals. Not all of them have realized that their negotiating position is now much stronger: for example, the British consortium JISC recently reached a <a href="https://gowers.wordpress.com/2016/11/29/time-for-elsexit/">calamitous deal with Elsevier</a>.<br /><br />The German consortium DEAL, however, did adopt a firm negotiating stance, and let subscriptions run out at the end of 2016, rather than accepting an unsatisfactory proposal by Elsevier. And DEAL’s Ralf Schimmer is not afraid to <a href="https://www.timeshighereducation.com/news/deal-impasse-severs-elsevier-access-some-german-universities">publicly mention Sci-Hub, and to advocate the collapse of the subscriptions system</a>. As a result, many German researchers are now unable to legally access Elsevier journals, except through slow, unsystematic and/or inconvenient procedures. Academics who write in Elsevier journals should know that their German colleagues may find it difficult to read their work: this is one more reason <a href="http://www.thecostofknowledge.com/">not to write in Elsevier journals</a>.<br /><br />With a Finnish and a Taiwanese consortium in comparable situations, DEAL is not alone, but it is the largest and probably the boldest consortium to take on Elsevier. We may soon learn how long academics can survive without the subscriptions to which they are accustomed, and whether important concessions can now be extracted from publishers. 
DEAL has a sound strategy (including being prepared not to reach an agreement) and a favourable environment (easy access to Sci-Hub): if it fails, nobody else is likely to succeed. If DEAL succeeds, it will find imitators, and the end of the subscriptions system could come as soon as the current subscription contracts expire. (These contracts are typically for five years.)<br /><br />Threatened with the demise of subscriptions, the publishing industry has two options:<br /><ol><li>performing a revenue-neutral switch from subscriptions to gold open access,</li><li>trying to block access to Sci-Hub.</li></ol>Option #1 is not popular with big publishers, who have rather been trying to earn money from gold open access on top of subscriptions. After all, from their point of view, they should earn more for giving access to everyone than for giving access to subscribers only. Hints that option #1 is still not pursued seriously include the rejection by Elsevier of DEAL’s demand that articles with German authors be made open access, and the recent launch by Springer Nature of <a href="http://www.nature.com/news/announcement-five-new-nature-journals-for-2017-1.21265">five new subscription journals</a>. So publishers are most likely pursuing option #2. Their efforts can hardly be limited to the rather ineffective <a href="https://torrentfreak.com/elsevier-complaint-shuts-down-sci-hub-domain-name-160504/">lawsuit of Elsevier against Sci-Hub</a>, and there may soon be further attempts to make Sci-Hub less accessible in countries where subscriptions matter, especially Germany.<br /><br />The future of open access may be determined by whether publishers manage to have Sci-Hub effectively blocked, before subscriptions irreversibly collapse. At the moment, publishers seem to think that they can win this slow-moving race. But at least, thanks to the DEAL consortium, the race has begun. 
Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-45073274844059655872017-01-13T14:19:00.000-08:002017-01-13T14:19:28.051-08:00Why don't academics write in Wikipedia?For several years now, Wikipedia has been widely used by academics. As a theoretical physicist, I often use it as a quick reference for mathematical terminology and results. Wikipedia is useful in spite of its many gaps and flaws: there was no general article on two-dimensional conformal field theory until I started <a href="https://en.wikipedia.org/wiki/Two-dimensional_conformal_field_theory">one</a> recently, the <a href="https://en.wikipedia.org/wiki/Minimal_models">article on minimal models</a> is itself minimal, and googling conformal blocks sends you to a <a href="http://physics.stackexchange.com/questions/1799/a-pedestrian-explanation-of-conformal-blocks">discussion on StackExchange</a>, since there is nothing on Wikipedia.<br /><br />The paradox is that many academics see these gaps and flaws in the coverage of their own favourite subjects, yet do nothing to correct them. Let me discuss three possible reasons for this passivity: fear of Wikipedia, lack of time, and laziness.<br /><br /><h4 id="the-jungle-outside-the-ivory-tower">The jungle outside the ivory tower</h4><h4 id="the-jungle-outside-the-ivory-tower"> </h4>Attracting and retaining academic contributors has long been recognized as a challenge by Wikipedians, to the extent that there are <a href="https://en.wikipedia.org/wiki/Wikipedia:Relationships_with_academic_editors#How_can_we_solve_it.3F">guidelines</a> on how to do it.<br /><br /><a name='more'></a><br />Academics may fear collaborating with generalists on an equal footing, and having their work changed or deleted by others. This fear is largely unfounded: Wikipedia has mature mechanisms for moderation and conflict resolution. And in most subjects, contributors are too few, rather than too many. 
Moreover, generalists can do plenty of useful work on style, structure, clarity, sourcing, etc. (They would even be able to help with <a href="http://researchpracticesandtools.blogspot.fr/2014/02/the-case-for-emancipating-articles-from.html#more">writing research articles themselves</a>, if given the opportunity.)<br /><br />Academics may also not be eager to comply with <a href="https://en.wikipedia.org/wiki/Wikipedia:Five_pillars">Wikipedia’s own guidelines</a>. In particular, Wikipedia strives to be much less technical than the academic literature. For example, the <a href="https://en.wikipedia.org/wiki/AdS/CFT_correspondence">article on the AdS/CFT correspondence</a> is a “featured article”, i.e. an article of top quality according to Wikipedians, and it contains only one equation. However, the guidelines are not enforced strictly, and Wikipedia’s “fifth pillar” is that there are no firm rules. In subjects that are underdeveloped today, Wikipedia will become what contributors make of it.<br /><br />If Wikipedia’s functioning sounds suboptimal to them, academics should consider the <a href="http://www.cracked.com/article_22712_6-ways-modern-science-has-turned-into-giant-scam.html">deeply flawed system of scientific communication</a> that they experience in their work, and compare the academic literature with Wikipedia in terms of quality and reliability.<br /><br /><h4 id="no-time-for-altruism">No time for altruism</h4><h4 id="no-time-for-altruism"> </h4>The main activities that advance academics’ careers are doing research, writing articles, and looking for money. Writing in Wikipedia is not among them, and could be dismissed as a waste of time.<br /><br />This would overlook the fact that academics already practise career-neutral activities, such as popularisation and peer reviewing. It is to these activities that writing in Wikipedia should be compared. Wikipedia probably reaches more people than any other medium of science popularisation. 
And contributing to Wikipedia probably advances science more than writing reports for a handful of readers. Vast amounts of time are spent reviewing unworthy articles, out of a misplaced sense of responsibility towards a system that mostly fails at filtering and improving the literature. It would be more constructive to spend some of this time on Wikipedia.<br /><br /><h4 id="is-laziness-a-good-excuse">Is laziness a good excuse?</h4><h4 id="is-laziness-a-good-excuse"> </h4>Writing good and complete Wikipedia articles of course requires much effort. I considered myself ready to start an article on two-dimensional conformal field theory only after writing a review article and giving lectures on the subject, plus updating the article on the Virasoro algebra. However, many useful contributions can be made with little effort: correcting a mistake, making a sentence more precise, adding a reference, or updating an obsolete fact.<br /><br />Moreover, the collaborative nature of Wikipedia allows each contributor to do what is easiest for him. This means not only writing on the subjects he knows best, but also focusing on the aspects of the work that he prefers: clarifying explanations, structuring articles or groups of connected articles, making figures, or choosing references. Everyone can, and should, contribute according to his tastes and abilities.<br /><br /><h4 id="conclusion">Conclusion</h4><h4 id="conclusion"> </h4>I have argued that neither fear of Wikipedia, nor lack of time, nor laziness is a good reason for academics not to contribute. The relative scarcity of academic contributors might be better explained by pure and simple inertia. The problem might solve itself when today’s academics are replaced by digital natives, but it would be a pity to wait until then.<br /><br />Wikipedia is not only a powerful tool of scientific communication, but also a world wonder whose existence shows that humanity is capable of collective wisdom. 
Contributing to Wikipedia should be part of the mission of academics.Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-2333518313301411602016-10-31T06:03:00.000-07:002016-10-31T06:37:39.340-07:00Publishing in SciPost: a must?Now that I have published my first <a href="https://scipost.org/10.21468/SciPostPhys.1.1.009">article</a> in <a href="http://researchpracticesandtools.blogspot.fr/2016/07/scipost-right-tool-for-commenting-arxiv.html">SciPost</a>, let me comment on that experience.<br /><h4 id="open-peer-review"> </h4><h4 id="open-peer-review">Open peer review!</h4><h4 id="open-peer-review"> </h4>The main reason I was attracted to SciPost in the first place is that it practises open peer review, which means that the referee reports are <a href="https://scipost.org/submission/1607.07224v1/">publicly viewable</a>. (The referees can choose to remain anonymous.) If one wants to improve the communication of research results, publishing referee reports is the obvious first step, as it requires no extra work, and has potentially large benefits for the quality of the process. Actually, publishing reports on a rejected article can even save some work if the article is later submitted elsewhere. (SciPost however erases reports on rejected articles.)<br /><a name='more'></a><br />In closed peer review, the only indication of quality is the reputation of the journal. Using the reputation (or the impact factor) of a journal as a proxy for the quality of an article is however <a href="https://en.wikipedia.org/wiki/Stimulus-triggered_acquisition_of_pluripotency">stupid</a>. In contrast, open peer review guarantees the quality of the peer review process at the level of the article. Readers should prefer open peer reviewed articles. 
Therefore, authors should prefer journals that do open peer review.<br /><br />In the case of SciPost, the referees not only write reports, but also give some qualitative grades in six domains: validity, significance, originality, clarity, formatting, grammar. While this is in principle a <a href="http://researchpracticesandtools.blogspot.fr/2014/03/rating-scientific-articles-why-and-how.html">good idea</a>, I do not see why each referee should comment on each aspect of the article. For example, I could complain that the first referee wrote a narrowly technical report, and has no understanding of the significance and originality of the work. Fortunately, readers can see that for themselves.<br /><br />SciPost’s procedure is not for the faint-hearted: my coauthors and I were a bit shocked by the first referee report, which came without comments from the editor, and without any indication of whether there would be other reports. Fortunately there was a second report, and it was more insightful.<br /><h4 id="on-the-use-of-arxiv"> </h4><h4 id="on-the-use-of-arxiv">On the use of arXiv</h4><h4 id="on-the-use-of-arxiv"> </h4>According to the <a href="https://scipost.org/submissions/sub_and_ref_procedure">procedure</a>, articles are submitted to SciPost from arXiv. While this is good and normal in the case of the initial submission, this also applies to subsequent revisions. Posting each revision on arXiv sounds strange, because in principle there could be many revisions until the article is published. But arXiv is not designed to host intermediate steps in a peer review process. ArXiv is designed to communicate articles to readers, and having too many versions can confuse those readers. 
Actually, arXiv has technical features that make it awkward to have many versions: there is a delay of about one day between the submission and the appearance of each version, and only the first four revisions of an article are announced in the daily “Replacements” list.<br /><br />While SciPost requires the posting on arXiv of intermediate versions, but not of the published version, the publishing platform <a href="http://episciences.org/page/article-submission">Episciences</a> does just the opposite. However, authors who are not fully happy with the published version should not have to inflict it on arXiv readers. Journals should respect arXiv’s role in scientific communication, and avoid treating it as a technical platform.<br /><h4 id="formatting"> </h4><h4 id="formatting">Formatting</h4><h4 id="formatting"> </h4>SciPost has its own format for articles, and takes great care of references. In practice, I found that the amount of required formatting work was reasonable. In principle however, I do not see why journals should apply their own cosmetics to the articles they publish. Moreover, having technically perfect references is useful <a href="http://researchpracticesandtools.blogspot.fr/search?q=write+for+humans">not to readers, but to bibliometric robots</a>, whose work perhaps need not be encouraged. So far, SciPost has missed the opportunity to free itself from needless formatting rules.<br /><h4 id="the-platform-possible-improvements"> </h4><h4 id="the-platform-possible-improvements">The platform: possible improvements</h4><h4 id="the-platform-possible-improvements"> </h4>The platform for submitting and tracking articles works well, but could be improved on a number of points:<br /><ol><li>It would be good to know in advance what the submission form looks like, without having to actually fill it in. 
In particular, SciPost’s subject classes differ from arXiv’s, and should be made explicit somewhere.</li><li>I would greatly appreciate it if all the data that I enter into the platform were accessible to me from my account. This would include submission forms, comments, and correspondence with editors.</li><li>It would be good to be able to preview texts such as comments on articles or replies to referee reports. Previewing before sending is all the more relevant, as these texts can contain LaTeX formulas. Being able to save them and send them later would also be good.</li></ol>These technical features might not be easy to code from scratch, but they are standard, and there should be a way to implement them without having to reinvent the wheel. More generally, SciPost could well take inspiration, if not code, from journals such as PeerJ, JHEP, or Discrete Analysis.<br /><h4 id="conclusion"> </h4><h4 id="conclusion">Conclusion</h4><h4 id="conclusion"> </h4>Open peer review is such an important feature that it outweighs the inconveniences of having to misuse arXiv, and of dealing with an imperfect platform. Nevertheless, improvements on these fronts would greatly increase the attractiveness of SciPost, and its chances of success.Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com2tag:blogger.com,1999:blog-9119793002820072645.post-69298287756230239602016-10-26T13:16:00.000-07:002016-10-26T13:25:56.685-07:00Physical Review Letters: physics' luxury journalHave you ever wondered why this apparently interesting new paper on arXiv was only four or five pages long? Why it had this unreadable format with two columns in fine print, with formulas that sometimes straddle both columns, and with these cramped figures? Why the technical details were relegated to appendices or future work, if not omitted altogether? 
And why so much of the already meager text was devoted to boastful hot air?<br /><br />Most physics researchers do not wonder for long, and immediately recognize a paper that is destined to be submitted to Physical Review Letters. That journal’s format is easy to recognize, as it has barely changed in 50 years, since a time when page limits had the rationale of saving ink and paper. That rationale having now evaporated, the awful format has nevertheless survived as a signal of prestige. Because, you see, Physical Review Letters is supposed to be physics’ top journal, which means that publishing there is supposed to be good for one’s career.<br /><a name='more'></a><br /><br />So Physical Review Letters is to physics what Nature, Science and Cell are to biology: a career-defining journal whose prestige is based on entrenched prejudice, rather than on the strength of its contribution to academic communication. The biologist Randy Schekman has <a href="https://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science">called Nature, Science and Cell “luxury journals”</a>, and vowed to boycott them because they are damaging science by rewarding the flashiest science, not the best.<br /><br />For the same reasons, physicists could well consider boycotting Physical Review Letters. This would first mean not publishing in that journal or doing editorial work for it, as in the case of the <a href="http://www.thecostofknowledge.com/">“Cost of knowledge” Elsevier boycott</a>. In the case of a luxury journal, this could even mean not reading or citing it. After all, if an article is so brief and unreadable that a longer version is needed for filling out the details, why not read and cite that longer version?<br /><br />Of course, in the prevailing academic environment, boycotting luxury journals is easier said than done, especially for researchers who, unlike Randy Schekman, have not yet received their Nobel Prize. 
Even when shown that it is possible to <a href="https://chorasimilarity.wordpress.com/2014/03/05/the-price-of-publishing-with-arxiv/">survive as a scientist</a> or even <a href="https://en.wikipedia.org/wiki/Grigori_Perelman">earn a Fields medal</a> on the strength of arXiv preprints, researchers will rightly be wary of their peers and administrators, who too often measure their contributions to science by counting articles in luxury journals. Then it is possible to adopt milder ways of mitigating the downsides of Physical Review Letters: for instance, writing an <a href="https://arxiv.org/abs/1412.5123">arXiv preprint</a> in one’s preferred format, and then rewriting it for submission to Physical Review Letters, while not considering the <a href="http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.131603">published version</a> as much more than an administrative form. (In this example, notice the differences in lengths, titles, and dates of last modification, between the arXiv and published versions.)<br /><br />In the age of arXiv, the purpose of scientific journals is rapidly moving from scientific communication to career management. Journals that want to retain some scientific relevance had better show it by taking steps such as eliminating absurd formatting constraints, and adopting some form of open peer review. Otherwise, they may well degenerate into mere purveyors of bureaucratic formalities / parasitic enterprises that suck time and money out of research / engines of the speculative bubble of bibliometrics. 
(Please choose your favourite invective.)Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com1tag:blogger.com,1999:blog-9119793002820072645.post-58347186285825125282016-10-21T15:32:00.000-07:002016-10-25T02:03:53.872-07:00Finite operator product expansions in two-dimensional CFTWhile the conformal bootstrap method has recently enjoyed the wide popularity that it deserves, its applications have been mostly restricted to unitary conformal field theories. (By definition, in a unitary theory, there is a positive definite scalar product on the space of states, such that the dilatation operator is self-adjoint.) Unitarity brings the technical advantage that <a href="http://researchpracticesandtools.blogspot.fr/2014/10/unitarity-and-reality-of-three-point.html">three-point structure constants are real</a>, so squared structure constants are positive, leading to bounds on allowed conformal dimensions. However, dealing with non-unitary theories using similar methods is surely possible, at the expense of having the signs of squared structure constants as extra discrete variables. And unitarity is sometimes assumed even in cases where it brings no discernible technical benefit, such as in studies of <a href="https://arxiv.org/abs/1608.06241">torus partition functions</a>, where multiplicities are positive integers whether the theory is unitary or not.<br /><br />So it is refreshing that, in their <a href="https://arxiv.org/abs/1606.07458">recent article</a>, Esterlis, Fitzpatrick and Ramirez apply the conformal bootstrap method to non-unitary theories. <br /><a name='more'></a>Their main ideas are to look for OPEs that involve only finitely many primary fields, and to focus on two-dimensional theories with local conformal symmetry (i.e. a Virasoro symmetry algebra, rather than just <span class="math inline">\(s\ell_2\)</span>). 
The focus on two-dimensional theories allows them to interpret their results in terms of minimal models and/or Coulomb gas integrals. In other words, they validate their numerical bootstrap results by comparing them with the analytic bootstrap results that have been accumulating since the 1984 article by Belavin, Polyakov and Zamolodchikov. The hope is of course that their numerical bootstrap method might be useful in cases where no analytic results are available.<br /><br /><h4 id="the-methods">The methods</h4><h4 id="the-methods"> </h4>The OPE <span class="math inline">\(\phi\times \phi\)</span> is constrained by numerically requiring crossing symmetry of the four-point function <span class="math inline">\(\left<\phi\phi\phi\phi\right>\)</span>. In the Gliozzi bootstrap method, this implies that a certain matrix has a vanishing kernel. But Esterlis, Fitzpatrick and Ramirez convincingly argue that it is better in practice, although equivalent in principle, to require that matrix to have a vanishing singular value. This is why they have nice plots of how singular values depend on <span class="math inline">\(\Delta_\phi\)</span>.<br /><br />They follow the good practice of including some of their code in the arXiv files, and this code has already been used in <a href="https://arxiv.org/abs/1609.02165">another article</a>. Of course, it would be even better to include all the necessary code, and to avoid using closed-source software such as Mathematica.<br /><br />There is also a brave attempt at analytically solving crossing symmetry using explicit formulas for the fusing matrix. Since this is limited to cases that involve degenerate fields, this approach is not yet very convincing, given that the results can more easily be deduced from the fusion rules, as I will argue below.<br /><br /><h4 id="the-results">The results</h4><h4 id="the-results"> </h4>Let me summarize the results of Esterlis, Fitzpatrick and Ramirez. 
The idea is to look for finite OPEs of the types <span class="math display">\[\phi \times \phi = 1 + \phi \quad , \quad \phi\times \phi = 1 \quad , \quad \phi\times \phi = 1 + \epsilon \quad , \quad \phi \times \phi = \epsilon\ ,\]</span> where <span class="math inline">\(\phi,\epsilon\)</span> are spinless Virasoro primary fields of arbitrary dimensions <span class="math inline">\(\Delta_\phi,\Delta_\epsilon\)</span>, and <span class="math inline">\(1\)</span> is a field of dimension zero. In order to write the results, it is convenient to introduce the dimensions <span class="math inline">\(\Delta_{r,s}=2h_{r,s}\)</span> with <span class="math display">\[h_{r, s} = \frac14\left[ (\beta - \beta^{-1})^2 - (\beta r - \beta^{-1}s)^2\right] \quad \text{with} \quad c = 1 - 6(\beta -\beta^{-1})^2\ .\]</span> The corresponding primary fields <span class="math inline">\(\phi_{r,s}\)</span> are degenerate if <span class="math inline">\(r,s\in\{1,2,3,\dots\}\)</span>. Here <span class="math inline">\(c\)</span> is the central charge, and minimal models correspond to the values <span class="math display">\[c_{p, p'} = 1-6\frac{(p-p')^2}{pp'} \quad \text{with} \quad p,p'\in \{2,3,4,\dots\} \ , \quad \text{i.e.} \quad \beta^{2} =\frac{p}{p'}\ .\]</span> Up to the symmetry <span class="math inline">\(\phi_{r,s} = \phi_{p'-r,p-s}\)</span>, the only solutions are found to be <span class="math display">\[\renewcommand{\arraystretch}{1.3} \begin{array}{l|l|l} \text{Case} & \text{Conditions on } \phi,\epsilon & \text{Condition on } c \\ \hline \phi \times \phi = 1 + \phi & \phi = \phi_{1,3} & c = c_{5,p'} \\ \phi \times \phi = 1 & \phi = \phi_{1, 1} & \\ \phi \times \phi = 1 & \phi = \phi_{1, p-1} & c = c_{p, p'} \\ \phi \times \phi = 1 + \epsilon & \phi = \phi_{1, 2} ,\ \epsilon = \phi_{1,3} & \\ \phi \times \phi = 1 + \epsilon & \phi = \phi_{1, p-2} ,\ \epsilon = \phi_{1,3} & c = c_{p, p'} \\ \phi \times \phi = \epsilon & \phi = \phi_{\frac12, \frac12},\ \epsilon = \phi_{0,0} & 
\end{array}\]</span> In the first five series of solutions, all fields are degenerate, and the solutions could likely be deduced from the degenerate field fusion rules, <span class="math display">\[\phi_{r_1,s_1} \times \phi_{r_2,s_2} = \sum_{r=|r_1-r_2|+1}^{r_1+r_2} \sum_{s=|s_1-s_2|+1}^{s_1+s_2} \phi_{r,s}\ ,\]</span> where the sums run in increments of <span class="math inline">\(2\)</span>, and if <span class="math inline">\(c=c_{p,p'}\)</span> the symmetry <span class="math inline">\(\phi_{r,s} = \phi_{p'-r,p-s}\)</span> should be taken into account, resulting in fewer fields in fusion products. For example, in the case <span class="math inline">\(\phi \times \phi = 1 + \phi\)</span>, it is natural to try <span class="math inline">\(\phi = \phi_{1,3}\)</span>, whose fusion product with itself is in general <span class="math inline">\(\phi_{1,3}\times \phi_{1, 3} = 1 + \phi_{1, 3} + \phi_{1, 5}\)</span>. We then need an extra null vector that would kill the field <span class="math inline">\(\phi_{1,5}\)</span>. The existence of an extra null vector implies <span class="math inline">\(c = c_{p,p'}\)</span>. Then, for the extra null vector of <span class="math inline">\(\phi_{1, 3} = \phi_{p'-1,p-3}\)</span> to kill <span class="math inline">\(\phi_{1, 5}\)</span> but not <span class="math inline">\(\phi_{1,3}\)</span>, we need <span class="math inline">\(4 < p<6\)</span> i.e. <span class="math inline">\(p=5\)</span>.<br /><br />It is likely that, with similar arguments, we can derive the first five series of solutions under the assumption that all fields are degenerate. But the case <span class="math inline">\(\phi \times \phi = \epsilon\)</span> is different, and does not include any degenerate fields. As the authors point out, it can however be explained by a Coulomb gas construction of the corresponding four-point function. 
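The fusion-rule argument above is easy to mechanize. Here is a minimal sketch (my own illustrative code, not the authors’; it assumes the standard truncation of the <span class="math inline">\(\phi_{1,s}\)</span> fusion rules at <span class="math inline">\(c=c_{p,p'}\)</span>, namely <span class="math inline">\(s\leq \min(s_1+s_2-1,\ 2p-1-s_1-s_2)\)</span>) that recovers the condition <span class="math inline">\(p=5\)</span>:

```python
from fractions import Fraction

def h(r, s, beta2):
    """Conformal weight h_{r,s}, with beta2 = beta^2 (exact rational arithmetic)."""
    beta2 = Fraction(beta2)
    # 4 h_{r,s} = (beta - 1/beta)^2 - (r beta - s/beta)^2, expanded in beta^2:
    return Fraction(1, 4) * ((beta2 - 2 + 1 / beta2)
                             - (r * r * beta2 - 2 * r * s + s * s / beta2))

def c(p, pp):
    """Minimal-model central charge c_{p,p'}, corresponding to beta^2 = p/p'."""
    return 1 - Fraction(6 * (p - pp) ** 2, p * pp)

def fusion_s(s1, s2, p):
    """s-indices appearing in phi_{1,s1} x phi_{1,s2} at c = c_{p,p'},
    assuming the truncation s <= 2p - 1 - s1 - s2 from the extra null vectors."""
    s_max = min(s1 + s2 - 1, 2 * p - 1 - s1 - s2)
    return list(range(abs(s1 - s2) + 1, s_max + 1, 2))

# phi_{1,3} x phi_{1,3}: phi_{1,5} is killed while phi_{1,3} survives only at p = 5.
for p in (4, 5, 6):
    print(p, fusion_s(3, 3, p))  # 4: [1]   5: [1, 3]   6: [1, 3, 5]
```

As a consistency check, the case <span class="math inline">\(p=4\)</span>, where the product truncates to the identity, matches the table’s row <span class="math inline">\(\phi=\phi_{1,p-1},\ \phi\times\phi=1\)</span>, since <span class="math inline">\(\phi_{1,3}=\phi_{1,p-1}\)</span> there.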
I would add that the presence of fields with half-integer indices reminds me of the <a href="https://arxiv.org/abs/1607.07224">recently proposed OPE</a> <span class="math inline">\(\phi_{0,\frac12}\times \phi_{0,\frac12} = \sum_{m,n\in\mathbb{Z}} \phi_{2m,n+\frac12}\)</span> in the Potts model (where however fields are non-diagonal unless <span class="math inline">\(m=0\)</span>).<br /><br />It is very possible that exploring further cases would yield genuinely new results that could well be inaccessible to other known methods. An obvious next case would be <span class="math inline">\(\phi\times \phi = \epsilon +\epsilon'\)</span>. The presence of non-diagonal fields could also be allowed. The existence of consistent CFTs with finite OPEs but no degenerate fields would obviously be very interesting.Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-88047543833340470392016-09-27T14:54:00.000-07:002016-10-04T07:41:05.780-07:00Scientific journal subscriptions: the CEA’s figuresOn page 10 of its 2015 activity report, the CEA’s Service de Valorisation de l’Information publishes the costs of subscriptions to electronic journals for the years 2014, 2015 and 2016, together with, for 2015, an estimate of the cost per downloaded article. Here I would like to circulate and comment on these figures.<br /><a name='more'></a><br />(Figures missing.)<br /><br />First of all, it is remarkable that the CEA makes these figures public. To obtain <a href="https://gowers.wordpress.com/2014/04/24/elsevier-journals-some-facts/">comparable figures for English universities</a>, in the case of the publisher Elsevier alone, Tim Gowers had to send them formal requests and put in substantial work. Of course, disclosing how public money is spent is in principle only natural, but it is not always done.<br /><br />(Paragraph missing.) 
<br /><br />Finally, with the increasingly widespread use of the pirate site <a href="https://en.wikipedia.org/wiki/Sci-Hub">Sci-Hub</a>, suspending or cancelling subscriptions could become less and less painful for researchers, and the institutes where they work should gain a better bargaining position in their negotiations with publishers. Subscription costs should therefore decrease, unless publishers managed to hinder the use of Sci-Hub. To achieve that, publishers would have to convince governments and/or research institutes to act against their own interests by blocking access to Sci-Hub as much as possible. Given their success in past negotiations, publishers do not seem a priori incapable of it.Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com1tag:blogger.com,1999:blog-9119793002820072645.post-68211001931984924862016-07-24T08:03:00.000-07:002016-07-24T08:06:29.399-07:00SciPost: the right tool for commenting arXiv articles?ArXiv has not changed much since it started in 1991, and it is only <a href="https://confluence.cornell.edu/display/culpublic/arXiv+User+Survey+Report">starting to consider</a> the obvious next steps: allowing comments on articles, followed by full-fledged open peer review. Scientists have not all been waiting idly for the sloth to make its move, and a few have <a href="https://johncarlosbaez.wordpress.com/2013/06/14/the-selected-papers-network-part-2/">tried</a> to build systems for doing that. Here I will discuss a recent attempt, called <a href="https://scipost.org/">SciPost</a>.<br /><h4 id="a-strong-editorial-college"> </h4><h4 id="a-strong-editorial-college"> A strong editorial college</h4>The most distinctive feature of SciPost is its <a href="https://scipost.org/about#editorial_college_physics">editorial college</a>, made of well-known theoretical physicists. 
These people do not just lend their names to the project. Given how SciPost functions, they have a lot of work:<br /><a name='more'></a><ul><li>Approve registration of new contributors.</li><li>Approve requests for new Commentary pages. Such requests need to be sent by contributors in order to be able to comment on arXiv or journal articles.</li><li>Approve comments themselves.</li><li>Oversee the refereeing of submitted articles, according to a <a href="https://scipost.org/submissions/sub_and_ref_procedure">quite complicated procedure</a>.</li></ul>The composition and degree of involvement of the editorial college make it clear that the emphasis is on the quality of contributions. Commenting platforms with weak quality control can easily degenerate into discussion forums. (Cf. the late Selected Papers Network.) SciPost seems designed to avoid this.<br /><h4 id="an-adequate-platform"> </h4><h4 id="an-adequate-platform">An adequate platform</h4>Having tested the website by looking around and writing a <a href="https://scipost.org/commentary/arXiv:1503.02067v2/#comment_id">small comment</a>, I have found that it works well. My registration, request for a Commentary page, and comment were approved in less than 24 hours, sometimes much less. LaTeX formulas are allowed. The search function works as expected.<br />While this is a good start, I hope that the platform’s capabilities will improve in due time:<br /><ul><li>A potentially crucial point would be to have Commentaries linked from arXiv using trackbacks.</li><li>Contributors who comment on an article could have the option to inform the authors by an automatically written email. 
And readers could have the option to follow a number of articles and contributors.</li><li>Different versions of the same article are apparently treated as unrelated objects: it would be nicer to have one Commentary for all versions, with the possibility to indicate which comments apply to which version(s).</li><li>More generally, SciPost could do less categorizing, and more tagging. There need not be essential distinctions between different versions of the same article, between Reports, Replies and Comments on an article, between authors and other contributors. These distinctions could be indicated by tags, rather than woven into the structure of the platform.</li></ul>If the platform became more flexible, it could be used in interesting ways that have not necessarily been foreseen by its creators. For example, if anonymous Comments were allowed, then I could post an anonymous report that I wrote for some other journal.<br /><h4 id="a-new-flavour-of-open-peer-review"> </h4><h4 id="a-new-flavour-of-open-peer-review">A new flavour of open peer review</h4>SciPost’s <a href="https://scipost.org/submissions/sub_and_ref_procedure">organization of open peer review</a> is distinctive for<br /><ul><li>the strong involvement of editors,</li><li>the principle that reports are publicly viewable, while reviewers can remain anonymous,</li><li>the deletion of reports when articles are rejected.</li></ul>Deleting information that has been publicly available does not look like a good idea. In contrast, PeerJ publishes reports only when articles are accepted. But it would be better to always keep reports publicly available: reports of a rejected article are particularly valuable, as they can prevent unnecessary duplication of effort if the article is submitted elsewhere.<br />Ultimately, one could even eliminate the notions of accepted and rejected articles. 
While not being that radical, SciPost takes steps in this direction by having reviewers <a href="http://researchpracticesandtools.blogspot.fr/2014/03/rating-scientific-articles-why-and-how.html">rate articles</a> in a more nuanced way than just accepting or rejecting. Reviewers are supposed to rate the validity, significance, originality and clarity of articles, with six possible grades (from “poor” to “top”) in each case. (The originality rating might be a bit subjective.) It would be good if such ratings were available to all commentators, and not just reviewers.<br /><h4 id="the-problem-of-attracting-contributors"> </h4><h4 id="the-problem-of-attracting-contributors">The problem of attracting contributors</h4>The main difficulty for any initiative like SciPost is to attract enough contributors, and to become a standard tool for a large enough community. With its editorial college, focus on quality, and institutional backing, SciPost has a chance of attracting quality-conscious contributors.<br />However, this comes at the expense of ease of use. New contributors wanting to post a comment should not have to wait for three successive approvals from moderators (for registering, opening a Commentary page, and posting the comment itself). Wikipedia and StackExchange have shown that allowing new contributors to bring content online immediately is not necessarily detrimental to quality control. In the case of SciPost, weak contributions need not be blocked: what matters is that strong contributions are recognized, displayed more prominently, and easy to find.<br /><h4 id="conclusion"> </h4><h4 id="conclusion">Conclusion</h4>SciPost is probably the best currently active platform for publicly commenting on arXiv articles, at least in theoretical physics. 
I hope that it gains the popularity that it deserves, and that its developers build on their good start and make it more flexible and easy to use.Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com1tag:blogger.com,1999:blog-9119793002820072645.post-88586073632599432732016-05-28T07:40:00.002-07:002016-05-28T07:52:23.105-07:00Was it such a good idea to put this review article in the public domain?Two years ago, when posting a review article on arXiv, I did the experiment of putting it in the public domain. The idea was to allow anyone to distribute and even to modify it, in the hope of increasing the circulation and usefulness of the article, as I explained in <a href="http://researchpracticesandtools.blogspot.fr/2014/02/the-case-for-emancipating-articles-from.html">this blog post</a>.<br />Putting the text in the public domain also has potential drawbacks:<br /><ul><li>losing revenue,</li><li>losing control.</li></ul>The potential loss of revenue is not a problem for me, as I am already employed and paid to do research by the French research agency CNRS. In fact I am not sure whether scientists should earn money from their professional writings or patents. Anyway, in the case of such a specialized text, the potential revenue would be small.<br />The potential loss of control is a priori more worrisome. Could my reputation be damaged if someone did something bad with my text? In order to find out, I had to wait until people actually did something with my text.<br /><h4 id="enters-amazon."> </h4><h4 id="enters-amazon.">Enter Amazon</h4>My review article is now available <a href="https://www.amazon.com/Conformal-Field-Theory-Sylvain-Ribault-ebook/dp/B01FEQ2LAG/ref=sr_1_1?s=digital-text&ie=UTF8&qid=1464034205&sr=1-1">for sale on Amazon</a>, in the Kindle format, at the price of about $9. 
I had nothing to do with that edition; I guess it was done by an Amazon robot.<br /><a name='more'></a><br />The problem is not that Amazon could make money from an otherwise <a href="https://arxiv.org/abs/1406.4290">freely available text</a>. After all, having it on Amazon might be useful to people who would not think of looking it up on arXiv. The problem is the formatting: judging from the preview, the Kindle formatting is disastrous, and in particular the formulas have become almost unreadable.<br /><h4 id="menace-or-opportunity"> </h4><h4 id="menace-or-opportunity">Menace or opportunity?</h4>So is this Kindle edition a danger to my academic reputation? A lost opportunity? Or free advertisement for my review article? I hope it is mostly the latter, and in order to improve the advertising value of the Amazon edition, I have posted a one-star review that directs readers to the arXiv version.<br />Still, I am disappointed that there is not yet a clean printed version of the text for sale – I might have bought it myself. But the experiment is not over, and I am still hoping that more people will “steal” my text.Sylvain Ribaulthttps://plus.google.com/110921736313805248561noreply@blogger.com0tag:blogger.com,1999:blog-9119793002820072645.post-29055527748697218502016-04-18T14:18:00.000-07:002016-04-18T14:18:43.017-07:00Building a website for physics courses with DrupalI have been involved in building <a href="https://courses.ipht.cnrs.fr/?q=en/welcome">a new website</a> for the theoretical physics courses at <a href="http://ipht.cea.fr/en">IPhT</a>, using the content management framework <a href="https://en.wikipedia.org/wiki/Drupal">Drupal</a>. 
This post is the story of this experience, written for researchers who are considering embarking on similar projects.<br /><br />Riccardo Guida and I have been organizing the IPhT courses for years (<i>many</i> years in Riccardo's case), and one year ago we finally decided to escape the IPhT website and set up a dedicated website for the courses. The problem with the IPhT website was that it did not know what a course was. A course was a collection of various objects: a number of "seminars", a "publication" where lecture notes could be stored, a few lines in a list of courses on a static webpage, etc. These objects did not talk to one another, and the same information had to be copy-pasted several times.<br /><a name='more'></a><br /><br />What could we do about this? An option would have been to modify the IPhT website. But that website is based on legacy homegrown software, which people are reluctant to modify. So better let it peacefully die of old age, and start something new.<br /><br /><h2>Choosing Drupal </h2><br />From the beginning, the idea was to build a new website ourselves, in order to have full control over it. The bet was that the necessary tools would be easy enough to learn and to use. Of course, the task ended up taking much more time than we expected: of the order of several person-months. Still, it was an altogether pleasant experience, as we worked well as a team, and were free to make (almost) all the relevant decisions. Moreover, we had good and responsive computer support, who managed the server, built a virtual machine, and installed the necessary software.<br /><br />Having a relatively precise idea of the website we wanted to build was crucial for choosing the right tools. We were looking for a Content Management System (CMS) that would allow us to build the website without actually writing code, and without reinventing common features such as menus, boxes, forms, comments, etc. 
Of course we wanted to use standard software, in order to be able to easily find information, tutorials, and competent people. The standard tools that we considered were WordPress, Joomla and Drupal:<br /><ul><li>WordPress is the most widely used tool for building websites, but its origin as a blogging platform made it unsuitable for the relatively complex website we were planning. </li><li>So we first installed Joomla, and started building a rough prototype of our website. To do this, we needed to choose a Content Construction Kit (CCK). While Joomla is itself a standard tool, it comes with a choice of many CCKs. Seblod, as our preferred CCK was named, is only one of many possibilities, and Joomla + Seblod is not that standard.</li><li>Therefore, we finally chose Drupal, in spite of its reputation for being difficult to learn. The main advantages of Drupal are being free and open source (including the modules), and typically having one standard module for each feature. </li></ul><h2> </h2><h2>Building the website </h2><br />Once we had Drupal installed, building the website was done in four phases. In practice, the phases are not done entirely in the logical order: for instance, some contents have to be filled in early on for testing purposes.<br /><br /><h3>Phase 1: Structuring the database</h3><br />Here come the crucial and irreversible decisions: defining objects, that is, collections of data in well-defined formats.<br /><br />For example, our basic object is a 'course'. A 'course' has one or more 'speakers', where a 'speaker' is a 'person' plus an 'affiliation'. A 'course' has one or more 'lectures', where a 'lecture' has a 'date' and a 'place'. A 'course' also has an 'abstract', and a number of 'topics', where a 'topic' is a term that belongs to a well-defined list.<br /><br />These structures actually result from choices that are non-trivial and very important. 
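As an aside, the data model above can be sketched in code. This sketch is purely illustrative (Drupal defines such objects as content types and fields through its administration interface, not as code; the class and attribute names below simply mirror our objects):

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Speaker:
    """A 'speaker' is a 'person' plus an 'affiliation'."""
    person: str
    affiliation: str

    def display(self) -> str:
        # The 'name (affiliation)' format used when listing speakers.
        return f"{self.person} ({self.affiliation})"


@dataclass
class Lecture:
    """A 'lecture' has a 'date' and a 'place'."""
    date: date
    place: str


@dataclass
class Course:
    """The basic object: a 'course'."""
    title: str
    speakers: list[Speaker]  # one or more
    lectures: list[Lecture]  # one or more
    abstract: str
    topics: list[str]        # terms from a well-defined list
```

In this sketch, the choice of 'course' rather than 'lecture' as the basic object is visible in the fact that the 'abstract' lives on Course, so individual Lectures cannot carry their own.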
For example, it was not clear from the start whether the basic object should be the 'course' or the 'lecture'. Having the 'course' as the basic object implies having to figure out how to find the 'course' to which any given 'lecture' belongs. It also implies that individual 'lectures' cannot have their own 'abstracts'.<br /><br /><h3>Phase 2: Structuring the website</h3><br />Now we should decide which information is displayed on which page, and how the pages are organized. This involves<br /><ul><li>selecting certain types of information ('speakers', 'dates', 'topics', etc), </li><li>choosing display formats ('table', 'calendar', etc), </li><li>applying filters, such as selecting courses on a certain topic, or with dates in a certain interval. It is possible to expose some filters, so that the website's visitors can use them. </li></ul>For example, the <a href="https://courses.ipht.cnrs.fr/?q=en/academic-year">program</a> is obtained by selecting the titles, speakers and dates of all courses that take place in the current academic year. Notice how speakers are displayed as 'name (affiliation)'.<br /><br /><h3>Phase 3: Graphics</h3><br />The website's appearance is based on a standard 'theme': a combination of geometrical positionings of objects, of choices of fonts for the texts, and of styles for elements such as buttons. Once a 'theme' is chosen (in our case, 'Corporate clean'), it comes with many parameters, such as colors. Moreover, it is sometimes desirable to intervene in the CSS code in order to fine-tune the appearance. <br /><br /><h3>Phase 4: Filling in the contents</h3><br />Entering the 95 courses that have taken place since 1996 was a good test of the website's convenience. And we were not disappointed, as most issues were with the contents (where do we recover the needed information on old courses? 
which topics do we assign to each course?), rather than with the website itself.<br /><br /><h2>Conclusion </h2><br />Building a nice website is doable and interesting, but time-consuming when one has to start from scratch, beginning with the selection and learning of Drupal. The effort will become more profitable if our newfound familiarity with Drupal is applied to other projects, or if our website, or clones thereof, are used for more series of courses.<br /><br />Sylvain Ribault<br /><br /><h2>The light asymptotic limit of $W$ algebra conformal blocks (2016-04-12)</h2><span class="math inline">\(W\)</span> algebras are natural extensions of the Virasoro algebra, the symmetry algebra of local conformal field theories in two dimensions. Conformal field theories with <span class="math inline">\(W\)</span> algebra symmetry include <span class="math inline">\(W\)</span> minimal models and conformal Toda theories, which are generalizations of Virasoro minimal models and Liouville theory respectively. In particular, <span class="math inline">\(sl_N\)</span> conformal Toda theory is based on the <span class="math inline">\(W_N\)</span> algebra, which has <span class="math inline">\(N-1\)</span> generators with spins <span class="math inline">\(2,3,\dots, N\)</span>, and reduces to the Virasoro algebra in the case <span class="math inline">\(N=2\)</span>.<br /><br /><h4 id="the-problem-of-solving-conformal-toda-theory">The problem of solving conformal Toda theory</h4>Solving <span class="math inline">\(sl_{N\geq 3}\)</span> conformal Toda theory is an outstanding problem. One may think that this is due to the complexity of the <span class="math inline">\(W_N\)</span> algebra, with its quadratic commutators. 
I would argue that this is rather due to the complexity of the fusion ring of <span class="math inline">\(W_{N}\)</span> representations, with its infinite fusion multiplicities. Due to these fusion multiplicities, solving <span class="math inline">\(sl_N\)</span> conformal Toda theory does not boil down to computing three-point functions of primary fields: rather, one should also compute three-point functions of infinitely many descendant fields.<br /><a name='more'></a><br /><br /><h4 id="the-interest-of-the-light-asymptotic-limit">The interest of the light asymptotic limit</h4>This is where the light asymptotic limit comes to the rescue. In this limit, the infinite-dimensional <span class="math inline">\(W_N\)</span> algebra reduces to the finite-dimensional <span class="math inline">\(sl_N\)</span> algebra, and <span class="math inline">\(sl_N\)</span> conformal Toda theory becomes tractable, while infinite fusion multiplicities are still present. In the case <span class="math inline">\(N=3\)</span> for example, four-point conformal blocks <a href="https://arxiv.org/abs/1109.6764">can be computed</a> as three-dimensional integrals in this limit.<br />In their <a href="https://arxiv.org/abs/1602.04829">recent article</a>, Hasmik Poghosyan, Rubik Poghossian and Gor Sarkissian have further shown how formulas for conformal blocks from the AGT correspondence simplify in the light asymptotic limit. More specifically, they have shown that in the light asymptotic limit, certain <span class="math inline">\(W_N\)</span> four-point blocks can be written as sums over <span class="math inline">\(\frac{N(N-1)}{2}\)</span> positive integers. The <span class="math inline">\(W_N\)</span> four-point blocks in question involve two generic fields and two almost fully degenerate fields, so they are not the most general <span class="math inline">\(W_N\)</span> four-point blocks. 
Rather, they constitute one of two classes of <span class="math inline">\(W_N\)</span> four-point blocks for which explicit formulas are known from the AGT correspondence. (There are two classes because there are two varieties of almost fully degenerate fields.)<br />The sum over <span class="math inline">\(\frac{N(N-1)}{2}\)</span> integers is a feature of the <span class="math inline">\(W_N\)</span> algebra, not of these particular four-point blocks. It is expected that in the light asymptotic limit, any <span class="math inline">\(W_N\)</span> four-point block can be written as a sum over <span class="math inline">\(\frac{N(N-1)}{2}\)</span> integers, or equivalently as an integral over <span class="math inline">\(\frac{N(N-1)}{2}\)</span> real variables. The results of Poghosyan, Poghossian and Sarkissian provide substantial support for this conjecture.<br /><br /><h4 id="suggestive-formulas-from-the-agt-correspondence">Suggestive formulas from the AGT correspondence</h4>Thanks to <a href="https://arxiv.org/abs/1409.6313">work by Vladimir Mitev and Elli Pomoni</a>, there is some hope that the AGT correspondence can be used for computing arbitrary correlation functions in conformal Toda theory, and not just particular correlation functions with no fusion multiplicities. When given a (very complicated) formula for a Toda correlation function, the challenge will be to understand how it can be decomposed into structure constants and conformal blocks. This decomposition depends on how fusion multiplicity is parametrized. So far, a simple parametrization of fusion multiplicity is <a href="https://arxiv.org/abs/1109.6764">known</a> only in the light asymptotic limit. 
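As a quick consistency check of the <span class="math inline">\(\frac{N(N-1)}{2}\)</span> counting above (my own arithmetic, not a result from the cited articles), the low-rank cases come out as expected:

```latex
% Number of summation variables (equivalently, integration variables)
% in a light asymptotic four-point block, for low ranks:
\frac{N(N-1)}{2} =
\begin{cases}
1 & N=2 \quad \text{(the Virasoro case)} \\
3 & N=3 \quad \text{(the three-dimensional integrals mentioned above)} \\
6 & N=4
\end{cases}
```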
So this limit may have a useful role to play in solving conformal Toda theory: either as a toy model and source of inspiration, or as the starting point of a perturbative expansion.<br /><br />Sylvain Ribault<br /><br /><h2>Perverse bibliometrics: the case of patents (2016-04-02)</h2>Bibliometrics, the counting of publications and citations, is being used for evaluating researchers, research institutions, and academic journals. But simple bibliometric indicators can be gamed, and complex indicators lack transparency. No known indicator avoids these two problems, while some indicators (such as the journal impact factor) manage to have both. As a result, the misuse of bibliometrics has been <a href="http://www.ascb.org/dora/">widely denounced</a>.<br /><br />In spite of these problems with bibliometrics, someone had the idea to do bibliometrics with patents, in order to rank research institutions. The result is Reuters' list of <a href="http://www.reuters.com/article/us-innovation-rankings-idUSKCN0WA2A5">the world's most innovative research institutions</a>, which is topped by <span id="articleText">the Alternative Energies and Atomic Energy Commission (CEA). The methodology for establishing the list is not known in detail, but we do know that it involves <a href="http://www.reuters.com/innovation/most-innovative-institutions/methodology">10 different criteria</a>, and is mainly based on the numbers of patents and citations thereof. </span><br /><a name='more'></a><br /><span id="articleText"><br /></span><span id="articleText">Who would give credence to such a list? First of all, of course, the <a href="http://ceasciences.fr/Phocea/Vie_des_labos/Ast/ast.php?t=actualites&id_ast=1535">institutes that belong to it</a>. 
Their place on the list is not due to chance, but rather to policies in favour of patenting, such as helping and encouraging their employees to file patents (as is done for example by CEA). Reuters' tools for counting patents help these institutes sell themselves to the public and to their funders. In return, these institutes subscribe to Reuters' tools, and trumpet Reuters' list. </span><br /><br /><span id="articleText">But the interest of bibliometrics is even more debatable in the case of patents than in the case of research articles. The point of patents is to make money from inventions, so why not evaluate their profitability instead? Well, maybe because patents are <a href="http://www.nytimes.com/2007/07/15/business/yourmoney/15proto.html?_r=0">actually not profitable</a>, as writing patents, filing patents, and defending patents in court cost much time and money. This would explain why some research institutes prefer counting patents and citations thereof. </span><br /><br /><span id="articleText">Actually, even if their patents were profitable, it is not clear whether publicly funded research institutions should be patenting their inventions, as opposed to making them freely available. There is a <a href="https://ec.europa.eu/programmes/horizon2020/en/h2020-section/open-science-open-access">growing consensus</a> that publicly funded research articles should be freely accessible; why should this not apply to publicly funded inventions?</span><br /><br />In any case, it makes little sense to patent anything patentable, as some research institutions seem intent on doing, and as Reuters' list certainly encourages them to do. 
Patents should be governed by a reasonable strategy: for example, SpaceX <a href="http://www.businessinsider.com/elon-musk-patents-2012-11?IR=T">does not patent</a>, Tesla Motors has patents but <a href="https://www.teslamotors.com/blog/all-our-patent-are-belong-you">does not enforce them</a>, and Apple used to patent little, but has since changed strategy and engaged in <a href="https://en.wikipedia.org/wiki/Smartphone_patent_wars">patent wars</a>. Of course the strategy of a research institution must differ from that of a company that actually builds and sells products. But maximizing one's patent count is not even a strategy, much less a good strategy.<br /><br />Sylvain Ribault