High-performance computing (HPC) achieved an astonishing three orders of magnitude of performance improvement per decade for three decades, thanks to hardware technology scaling that delivered exponential growth in floating-point execution rates, though this growth has slowed in the most recent decade. Captured in the Top500 list, this hardware evolution cascaded through the software stack, triggering changes at all levels, including the redesign of numerical linear algebra libraries. HPC simulations on massively parallel systems are often driven by matrix computations, whose rate of execution depends on their floating-point precision. We highlight the implications for HPC applications of mixed-precision (MP) matrix algorithms, which Jack Dongarra, the 2021 ACM A.M. Turing Award Laureate, has called "responsibly reckless." Introduced 75 years ago, long before the advent of HPC architectures, MP numerical methods turn out to be paramount for increasing the throughput of traditional and artificial intelligence (AI) workloads beyond what riding the hardware wave alone can deliver. Reducing precision trades away some accuracy for performance (the reckless behavior), but only in noncritical segments of the workflow (the responsible behavior), so that the accuracy requirements of the application can still be satisfied. MP methods thus offer a valuable performance/accuracy knob and, just as in AI, they are now indispensable in the pursuit of knowledge and discovery in simulations. In particular, we illustrate the impact of MP on three representative HPC applications related to seismic imaging, climate/environment geospatial predictions, and computational astronomy.
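To make the "responsibly reckless" idea concrete, the sketch below illustrates the classical MP pattern of iterative refinement for a dense linear solve: the expensive O(n^3) factorization runs in low precision, while the cheap O(n^2) residual corrections run in high precision. This is a minimal illustration using NumPy and SciPy; the function name `mixed_precision_solve` and the tolerance and iteration parameters are our own assumptions, not an interface from any library discussed in the article.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, tol=1e-12, max_iter=20):
    """Solve A x = b via mixed-precision iterative refinement (a sketch).

    The costly LU factorization is done in float32 (the "reckless"
    part); the inexpensive residual corrections are done in float64
    (the "responsible" part), so that, for reasonably conditioned
    systems, the result approaches double-precision accuracy.
    """
    A64 = np.asarray(A, dtype=np.float64)
    b64 = np.asarray(b, dtype=np.float64)

    # Factor once in low precision: this step dominates the runtime.
    lu, piv = lu_factor(A64.astype(np.float32))

    # Initial low-precision solve, promoted back to float64.
    x = lu_solve((lu, piv), b64.astype(np.float32)).astype(np.float64)

    for _ in range(max_iter):
        # Residual computed in high precision: the accuracy-critical step.
        r = b64 - A64 @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b64):
            break
        # Correction solve reuses the cheap float32 factorization.
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

The design choice here mirrors the article's framing: precision is lowered only where the workflow tolerates it (the factorization), while the segments that govern final accuracy (residual evaluation and convergence testing) stay in high precision.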