Research software drives scientific progress, but evaluating its quality is no simple task. Learn why this evaluation is vital and how the CDUR approach helps raise the bar for research software.
| Resource information | Details |
|---|---|
| Article Title | On the Evaluation of Research Software: The CDUR procedure |
| Authors | Teresa Gomez-Diaz and Tomas Recio |
| Focus | Research software, its evaluation, and its citation |
| Presentation Language | English |
*On the Evaluation of Research Software: The CDUR procedure* distinguishes research software (RS) from scientific software in general: RS is software explicitly developed for research purposes. Beyond clarifying what RS is, the authors examine the actors involved (researchers, developers, and other contributors) as well as the ambiguous space RS occupies between research results, tools, and publications. This framing helps readers appreciate the inherent challenges of evaluating RS.
Traditionally, research evaluation centers on papers, citations, and grants: metrics that do not translate easily to software. As the paper points out, the RS ecosystem involves many actors, from code contributors and domain scientists to team leaders and project managers. Not all of these individuals write code, yet their input can be critical to the project and the resulting science. The authors therefore emphasize the difficulty of applying conventional authorship and evaluation models to software-based research.
The proposed CDUR protocol, standing for Citation, Dissemination, Use, and Research, offers a structured, four-step approach to evaluating RS (a minimal code sketch follows the list):
(C) Citation: Ensures that the software and its authors are properly cited, with attention to intellectual property and affiliation issues.
(D) Dissemination: Evaluates how the software is shared, with emphasis on open-source licensing and best practices in dissemination, both key aspects of Open Science.
(U) Use: Examines usability, correctness, and reproducibility, all of which can be improved through sound software-engineering practices.
(R) Research: Assesses the scientific impact and quality of the research that the software enables.
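To make the four steps concrete, here is a minimal sketch of how a CDUR-style review might be recorded as a simple checklist. Only the four step names come from the paper; the checklist structure, the `Step` class, and the wording of the example criteria are illustrative assumptions, not part of the authors' procedure.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the step names follow CDUR, but the
# data structure and criteria are assumptions for demonstration.

@dataclass
class Step:
    name: str
    criteria: dict[str, bool] = field(default_factory=dict)

    def score(self) -> float:
        """Return the fraction of criteria met (0.0 to 1.0)."""
        if not self.criteria:
            return 0.0
        return sum(self.criteria.values()) / len(self.criteria)

def cdur_checklist() -> list[Step]:
    """Build the four CDUR steps with example criteria drawn from the article."""
    return [
        Step("Citation", {
            "software and its authors are properly citable": False,
            "intellectual property and affiliations are clear": False,
        }),
        Step("Dissemination", {
            "license chosen (ideally open source)": False,
            "code shared following dissemination best practices": False,
        }),
        Step("Use", {
            "usable by others (documentation, installation)": False,
            "results are correct and reproducible": False,
        }),
        Step("Research", {
            "software enables published research": False,
            "scientific impact and quality assessed": False,
        }),
    ]

if __name__ == "__main__":
    for step in cdur_checklist():
        print(f"{step.name}: {step.score():.0%} of criteria met")
```

In practice each criterion would be an evaluator's judgment rather than a boolean, but even this toy structure shows how the four steps separate concerns: who gets credit, how the code is shared, whether it works, and what science it produces.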
The paper presents a thoughtful and practical framework for recognizing research software as a first-class scientific artifact. By combining principles of citation, openness, usability, and impact, the CDUR procedure bridges the gap between traditional research assessment and the realities of modern, software-driven science.
Note: Portions of this article were edited with the help of AI tools.