The conventional one-at-a-time strategy for evaluating catalysts is inefficient and resource-intensive. Even a fractional factorial design covering temperature, pressure, composition, and stability takes weeks to complete, and quantifying day-to-day variability and assuring data quality add further to the time sink. High-throughput catalyst testing (HTCT) with as many as 64 parallel reactors reduces experimental time by two orders of magnitude and decreases variance, since it can quantify random errors. This approach to heterogeneous catalyst development requires dosing each reactor with precisely the same flow and composition, controlling temperature and identifying the isothermal zone, analysing the gas phase on-line, and maintaining constant pressure with a common back-pressure regulator. Silica-capillary or microfluidic distribution-chip manifolds split a common feed stream precisely (standard deviation <0.5%), guaranteeing reproducibility. With these systems, experimenters shift their focus from operating a single reactor to careful catalyst synthesis, the often delicate loading of the reactors, and the handling, processing, and analysis of large volumes of data. This review presents an overview of the main elements of HTCT, its applications, potential sources of uncertainty, and a set of best practices derived from the scientific literature and research experience. A bibliometric analysis of articles indexed in Web of Science from 2015 to 2020 grouped the research into five clusters: (1) discovery, directed evolution, enzymes, and complexes; (2) electrocatalysis, reduction, adsorption, and nanoparticles; (3) hydrogen, oxidation, and stability; (4) DFT, combinatorial chemistry, and NH3; and (5) graphene and carbon.
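To make the throughput and reproducibility claims concrete, the short sketch below is a back-of-the-envelope illustration only: the campaign size (256 formulations), run duration (3 days), and simulated per-channel flows are assumed numbers, not data from any specific HTCT unit. It compares a sequential campaign with a 64-fold parallel one and checks a simulated flow distribution against the <0.5% standard-deviation target quoted for capillary and microfluidic manifolds.

```python
import numpy as np

# Assumed, illustrative numbers (not from a specific HTCT system):
N_CATALYSTS = 256   # formulations to screen
DAYS_PER_RUN = 3    # duration of one test under a fixed condition set
N_PARALLEL = 64     # reactors sharing one feed and one analyser

# Sequential vs. parallel campaign duration
sequential_days = N_CATALYSTS * DAYS_PER_RUN
parallel_days = np.ceil(N_CATALYSTS / N_PARALLEL) * DAYS_PER_RUN
print(f"Sequential campaign: {sequential_days} days")
print(f"{N_PARALLEL}-fold parallel campaign: {parallel_days:.0f} days "
      f"(~{sequential_days / parallel_days:.0f}x faster)")

# Flow-distribution check: relative standard deviation of the per-channel
# flows should stay below the 0.5% reproducibility target.
flows = np.random.normal(loc=10.0, scale=0.04, size=N_PARALLEL)  # mL/min, simulated
rsd = 100 * flows.std(ddof=1) / flows.mean()
print(f"Relative standard deviation of channel flows: {rsd:.2f}% "
      f"({'within' if rsd < 0.5 else 'outside'} the 0.5% target)")
```

In practice, the overall gain depends on how much of the workflow (analysis, condition changes, catalyst loading) is shared or automated, which is why the review cites up to two orders of magnitude rather than a fixed factor.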