Reproducibility plays a central role in CSE, especially in an era when research is shared chiefly via scientific publications: it ensures transparency in the experimentation being done and instills confidence in the integrity of the underlying scientific study.
Reproducibility is the ability to repeat an experiment and obtain results that substantially match those of the original execution. Within the context of computational science, three scenarios are common:
Scenario I
A research team produced a set of computational results one year ago. Using the same input data and the same computational environment, the same team is able to produce the same results today, with an acceptable degree of variation.
Scenario II
A research team produced and published a set of computational results. Using the published material and related artifacts, tools, and environments, an independent team is able to produce the same results today, with an acceptable degree of variation and with an amount of effort that encourages reuse of the published material.
Scenario III
A research team produced and published a set of computational results. Starting with the same assumptions and conditions, an independent team is able to confirm the results using input data and a computational environment that is different from that of the original team.
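What counts as "an acceptable degree of variation" in these scenarios is typically made precise by comparing the reproduced results against the archived originals within explicit numerical tolerances. The sketch below is one minimal way to do this in Python; the file names, NumPy format, and tolerance values are illustrative assumptions, not prescriptions from the scenarios above.

```python
# Minimal sketch: check whether reproduced results match archived reference
# results within stated tolerances. File names and tolerances are illustrative.
import numpy as np

def results_match(reference_file: str, reproduced_file: str,
                  rtol: float = 1e-6, atol: float = 1e-9) -> bool:
    """Return True if the reproduced results agree with the archived
    reference results to within the given relative/absolute tolerances."""
    reference = np.load(reference_file)    # e.g. results archived a year ago
    reproduced = np.load(reproduced_file)  # e.g. results recomputed today
    return np.allclose(reference, reproduced, rtol=rtol, atol=atol)

if __name__ == "__main__":
    if results_match("reference_results.npy", "reproduced_results.npy"):
        print("Results reproduced within the stated tolerance.")
    else:
        print("Results differ beyond the stated tolerance.")
```

The tolerance that is "acceptable" is a scientific judgment tied to the problem at hand (discretization error, floating-point nondeterminism, hardware differences), so it should be stated alongside the published results rather than assumed.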
For Scenarios I and II, reproducibility demands a detailed accounting and retention of the algorithms, software, and input data used to produce the initial computational results. Reproducibility therefore increases the incentive for computational science teams to invest in high-quality software and data management practices.
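One lightweight form this accounting can take is a provenance record written next to the results, capturing the software environment, checksums of the input data, and the run parameters. The following sketch assumes a small self-contained script; the file paths and parameter names are hypothetical examples, and real projects often rely on dedicated tools (containers, environment lock files, workflow systems) for the same purpose.

```python
# Minimal sketch of record-keeping for Scenarios I and II: capture the
# environment, input-data checksums, and run parameters alongside the results.
import hashlib
import json
import platform
import sys
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Checksum an input file so a later run can verify it used the same data."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_provenance(input_files, parameters, out_path="provenance.json"):
    """Record environment details, input-data hashes, and parameters as JSON."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "python_version": sys.version,
        "platform": platform.platform(),
        "input_files": {p: sha256_of(p) for p in input_files},
        "parameters": parameters,
    }
    with open(out_path, "w") as f:
        json.dump(record, f, indent=2)

# Example usage (file name and parameters are hypothetical):
# write_provenance(["mesh.dat"], {"solver": "cg", "tolerance": 1e-8, "seed": 42})
```

Archiving such a record with the published material directly supports Scenario I (the original team rerunning its own experiment) and lowers the effort barrier that Scenario II places on an independent team.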