In scientific software, reviewing is more than a service. It is an act of stewardship. Whether we are evaluating papers, proposals, or software contributions, how we review shapes the trajectory of research, influences funding decisions, and signals what our community values.
This blog post advocates for conscious reviewing: engaging in review activities with awareness, integrity, and care. Conscious reviewing is about more than checking a box or adding a line to one's CV; it is a responsibility we owe to each other and to the future of scientific discovery using computation.
Reviewing is a commitment, not a line on a CV
Reviews and scores are the academic currency of scientific work. They affect funding decisions, publication acceptances, and ultimately careers in research. Agreeing to review is not something we should do lightly: it is a commitment of time and a responsibility to our peers, not a line to pad one's service record.
Recommendation: Accept review invitations only if you have the bandwidth and the intention to provide a constructive, thoughtful review. Declining when one is too busy is far better than rushing through a superficial review at the last minute. A rushed review is not only detrimental to science; it also devalues the work of our peers.
Maintenance and incremental work deserve respect
There is a tendency to view incremental or maintenance-oriented research as “not novel enough.” In the context of scientific software, this is particularly problematic. Updates that improve a software tool's usability, performance, or sustainability may not be flashy, but they are foundational to long-term scientific impact. Conscious reviewing should value these contributions appropriately. Not every paper needs to revolutionize the field directly. Community tools improve incrementally with feedback from their users, and by enabling better exploration they are more likely to lead to revolutionary discoveries. Efforts to improve these tools therefore deserve visibility and support.
Recommendation: Review with humility and empathy. Ask: What is the actual contribution? Is it valuable to the community—even if it's not flashy? Is the tool critical for scientists? Will making it work better improve its impact on scientific discovery?
A “Poor” proposal score is more than just a rating
In the context of grant reviewing, a “Poor” (P) rating for a research proposal carries strong implications. While the proposal review guidelines are clear that such a rating reflects serious deficiencies in intellectual merit, broader impacts, or feasibility, many new reviewers (and even some experienced ones) may not fully realize what “Poor” actually signals. In simple terms, “Poor” can be interpreted as: Do NOT show me this research again. That’s a strong message, and we must be confident in that level of rejection before assigning it.
Recommendation: Use extreme scores, whether for papers or proposals, only when they are justified with clear, actionable, and fair reasoning. If we would not want to receive such a review ourselves, we should rephrase it or reconsider its tone.
Rejecting papers: be firm, but fair
Not all submissions will meet the bar for acceptance, and it is the reviewer’s job to identify those that fall short. But even in rejection, there is an opportunity to nurture future work. A recommendation to reject should never be a dismissal of the author’s value; it should be a reflection on the specific shortcomings of that submission. A “Reject” signals that the paper has fundamental flaws that prevent acceptance. A “Strong Reject” means the paper is far below community standards. Both should be supported by detailed, constructive feedback, not vague or dismissive comments.
Recommendation: Approach reviewing as a feedback opportunity, not a judgment session. Help the authors understand how to improve, even if the decision is not in their favor.
Ask ourselves: How would we feel receiving this review?
Flipping the perspective is one simple but powerful way to check the tone and fairness of a review. How would this review make us feel if it landed in our inbox? Would we learn something from it? Would we feel the reviewer cared about our work? Or would we feel shut down? Our goal as reviewers should be to improve the quality of work submitted to our conferences, journals, and funding agencies. That does not mean saying “yes” to everything. It means saying “no” with clarity and purpose.
Final thoughts
Conscious reviewing works best when we see it as an act of service: an investment in the health of our research ecosystem. That ecosystem thrives when reviewers commit the time, apply community values, and treat each submission as an opportunity to mentor and elevate others. If we want better reviews for our own work, we must write better reviews for others. If we want a community that values both novelty and reliability, we must acknowledge the importance of incremental work, maintenance, and software contributions. If we want fairness, we must review with fairness.
Let’s raise the bar—not just for papers and proposals, but for ourselves as reviewers!
Author bios
Michela Taufer, an AAAS Fellow and ACM Distinguished Scientist, holds the Dongarra Professorship in High-Performance Computing at the University of Tennessee, Knoxville. She earned her Laurea in Computer Engineering from the University of Padova, Italy, and her doctoral degree in Computer Science from the Swiss Federal Institute of Technology in Zurich. Her postdoctoral work at the University of California, San Diego, and the Scripps Research Institute bridged computer systems and computational chemistry. Her research focuses on cyberinfrastructure solutions that leverage HPC, cloud, and volunteer computing to advance science while promoting reproducibility, transparency, and reusability.
Anshu Dubey is a Senior Computational Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory. She is also a Senior Scientist in the Computer Science Department at the University of Chicago. She received her B.Tech. from the Indian Institute of Technology, New Delhi in 1985 and her Ph.D. in computer science from Old Dominion University in 1993, and did her postdoctoral work in the Astronomy & Astrophysics Department at the University of Chicago. Her research interests include the design, architecture, and sustainability of multiphysics scientific software used on high-performance computing platforms.