The international exascale computing community is working toward unprecedented computational resources for next-generation science -- including hardware, system software, libraries, and applications.
When paired with critical advances in CSE algorithms and software, the enormous power of extreme-scale architectures will enable high-fidelity multiscale, multiphysics modeling, simulation, and analysis, leading to accelerated, broadly impactful scientific discoveries. The challenge is how to deliver and effectively use systems capable of at least one quintillion (10^18) calculations per second. (In comparison, today's fastest supercomputers operate in the petascale range of 10^15 calculations per second.) One strategy currently being explored is holistic: coordinating efforts across all of these areas to achieve an integrated exascale ecosystem.
The transition of applications to exploit massive on-node concurrency, the need to couple physics across scales, and continued disruption in the underlying hardware, system software, and programming environments -- together these create the most challenging environment for developing CSE applications in at least two decades. Software is escalating in complexity as a result of increases in system scale and heterogeneity and the demand for fundamental algorithm and software refactoring. In turn, this complexity raises daunting issues in deploying, coordinating, extending, maintaining, and effectively using libraries, frameworks, tools, and application-specific infrastructure. Moreover, as scientific applications grow in sophistication, interdisciplinary collaboration using software developed by independent groups becomes essential. This situation brings with it a unique opportunity -- and an implicit mandate -- to fundamentally change how scientific software is designed, developed, and sustained.