From floating-point representation to iterative solvers, approximation has always been embedded in scientific computing. But today's hardware trends, especially those driven by AI workloads, push the accuracy-performance tradeoff to the forefront. The question is no longer whether we can tolerate approximation, but how much and where. This shift is redefining how we think about performance in future HPC systems.
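As a minimal illustration of the "how much" question (an assumed example using NumPy, not drawn from the article itself), the same value stored in the reduced-precision formats favored by AI hardware carries a measurably larger representation error than the double precision traditionally used in scientific codes:

```python
import numpy as np

# Compare how accurately 0.1 is represented at different precisions.
# float64 serves as the reference, so its reported error is zero.
reference = 0.1
for dtype in (np.float16, np.float32, np.float64):
    stored = float(dtype(reference))
    rel_err = abs(stored - reference) / reference
    print(f"{dtype.__name__:>8}: stored={stored:.17g}  relative error={rel_err:.2e}")
```

On typical hardware this prints a relative error of roughly 2e-4 for float16 and 1.5e-8 for float32, which is the kind of quantitative gap an application must decide it can, or cannot, tolerate.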