Friday 31 December 2021

Reforming research assessment: nice declarations, little action?

There seems to be a consensus among universities and research funders that research assessment should not be based on crude quantitative metrics such as numbers of articles, numbers of citations, journal impact factors or the h-index. The 2012 San Francisco Declaration on Research Assessment (DORA) formulates principles that, if applied, could greatly improve research assessment, although I would argue that DORA is misguided in its recommendations to authors. DORA has been signed by thousands of organizations: for France alone, these include the Academy of Sciences, the CNRS and the HCERES. More recently, the European Commission has issued a report called Towards a reform of the research assessment system, which deals with the same issues and promotes similar principles.

Since the same principles have to be reiterated nine years later, you may suspect that little has changed in the meantime. You would be largely right. Significant reforms of research assessment in individual organizations are so rare that they are newsworthy, and some universities have been denounced for taking actions that directly contradict the principles they officially endorsed.

In the case of the CNRS, the 2019 Roadmap for Open Science states that “providing a full and complete list of productions is unnecessary”. However, the current form that candidates for permanent positions must fill in requires a complete list of productions. In addition, candidates are asked to provide the following statistics:

  • Number of publications in peer-reviewed journals

  • Number of publications in peer-reviewed conference proceedings

  • Number of books or book chapters

  • Number of theses supervised

  • Number of theses co-supervised

  • Number of invited lectures in international scientific conferences

  • Number of patents

If listing all publications is unnecessary, why count them? One may hope that these statistics play little role in the eventual decisions: after all, candidates must also provide qualitative information, including their research program. Nevertheless, the roadmap is clearly not reflected in current practice.

Faced with such requests, what should researchers do? The required numbers are ill-defined: in “invited lecture in an international scientific conference”, almost every word is ambiguous (what counts as an invitation, a lecture, or an international conference?). Even the h-index depends on who computes it, because different databases index different publications and citations. The principled response is therefore to ignore such requests. Of course, one may be afraid to miss an opportunity for employment or promotion; but would a good researcher really want to work for an employer whose decisions are based on meaningless statistics?
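To make the ambiguity concrete, here is a minimal Python sketch of the usual h-index definition (the largest h such that h of one's papers have at least h citations each), applied to hypothetical citation counts for the same ten papers as two databases might report them. The numbers are invented for illustration; the point is only that the same formula yields different values depending on which citations are counted.

  def h_index(citations):
      # Largest h such that the author has h papers with at least
      # h citations each.
      counts = sorted(citations, reverse=True)
      h = 0
      for rank, c in enumerate(counts, start=1):
          if c >= rank:
              h = rank
          else:
              break
      return h

  # Hypothetical citation counts for the same ten papers, as two
  # different databases might report them (invented numbers):
  counts_db_a = [52, 40, 18, 12, 9, 7, 5, 3, 1, 0]
  counts_db_b = [61, 44, 25, 15, 11, 9, 8, 4, 2, 1]

  print(h_index(counts_db_a))  # 6
  print(h_index(counts_db_b))  # 7

A difference of one or two points, caused purely by database coverage rather than by the research itself, is exactly the kind of noise that such metrics pass off as precision.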
