The recent San Francisco Declaration on Research Assessment (DORA) aims to improve how scientific research is evaluated. To this end, the declaration proposes not only to improve the evaluation process itself, but also to modify how research is reported, in particular by having authors of scientific articles cite original articles in preference to review articles. But this would be bad scientific practice: researchers should not have to worry about the reliability of bibliometrics when they do research or write articles.
As a method of evaluating research, bibliometrics has many well-known shortcomings. The DORA denounces some of these shortcomings and proposes remedies. Some of these remedies are common sense, such as no longer using journal impact factors, and evaluating the contents of articles (and other research outputs) rather than relying on bibliometrics.
Some other proposals aim at improving bibliometrics itself. It is not obvious that this would be good, because bibliometrics will always encourage bad practices, such as trading citations and authorships, and splitting results into small articles. Eliminating some of these flaws at the expense of making the metrics more complicated may not be worth the trouble.
And proposals 10 and 16, which ask authors of scientific articles to cite original articles in preference to review articles, are downright pernicious. Here the aim is no longer to improve bibliometrics itself, but to modify how articles are written in order to make bibliometrics more reliable. So the DORA claims that bibliometrics is so flawed that its use should be much reduced, yet so important that great efforts should be made to improve it -- efforts not limited to modifying the metrics themselves. In this sense, the DORA is self-contradictory.
This might not matter too much if citing original articles were good scientific practice. But it is not. The original literature is a tangled mess of more or less reliable and understandable texts. Review articles play the vital roles of making sense of it, and of promoting common, generally accepted terminology and ideas. Writing review articles is in some cases more useful than doing original research, and should be encouraged. Moreover, researchers often learn of existing results not from original articles, but from other texts, which may be clearer or more accessible than the originals. Citing the original articles would then often mean citing articles without having read them, which is obviously bad practice.
Remember why articles cite earlier works in the first place: to avoid repeating material which is available elsewhere, and to help readers find the origins and proofs of the results which are built upon. To do this efficiently, one should cite no more articles than necessary, and select them based on clarity, reliability and ease of access. (Openly accessible texts should surely be favoured.) There is no reason to favour original articles. Favouring original articles serves another purpose: as the DORA puts it, to "give credit where credit is due". But the purpose of research articles is not to give credit. An article's contribution to the history of its subject is only a byproduct, and should not take precedence over its primary purpose of reporting scientific ideas.
Bibliometrics was supposed to help with the task of giving credit, but now we are told that it will not work unless we think more about bibliometrics than about readers when we write articles. This should be resisted: write for humans, not for robots.
This blog's initial intentions
This blog is for discussing practices in scientific research, as they are and as they should be, and the tools which can help make research more efficient.
The focus is not to describe the practices which researchers should follow for the sake of their careers, but rather to discuss what should be done in order to do good and useful research. As a theoretical physicist, my definition of useful research will be research whose results are easily accessed, understood and discussed -- so scientific publishing will be a major topic.
It has been obvious for a long time that scientific journals could be replaced with a much more efficient and cheaper system. But opinions vary widely as to whether and when this can happen, from predictions of the imminent doom of commercial science publishing, to despair at the slow progress of open access. It is also not clear how a new system could emerge from the current situation: evolutionarily from existing journals? as an arXiv overlay such as the Selected Papers Network? or as a new creation such as the journal PeerJ?
In some cases, there is a consensus on what the good and bad practices are, and the question is how to switch from the latter to the former: how do we escape rapacious scientific publishers? how do we stop using impact factors and the h-index? In other cases, the identification of good and bad practices is less clear: is it good practice to use Mathematica? should researchers contribute to Wikipedia? should publicly-employed scientists put their writings in the public domain?
Some recommended reading:
- Gowers's Weblog.
- Two texts of mine on scientific publishing (in French): Journaux scientifiques: échapper aux abonnements ruineux (Scientific journals: escaping ruinous subscriptions) and Les publications scientifiques au CEA : quelques questions (Scientific publications at the CEA: a few questions).
- Nielsen's book "Reinventing Discovery" -- citing it is admittedly bad practice, as the text is not freely accessible, but this book is a compelling explanation of how research practices could be radically improved.