Owing to a legacy of the limited capabilities of early computers, the spectroscopic resolution used in Fourier transform infrared spectroscopy and other systems has, for more than 50 years, largely been restricted to powers of two. In this study, we set out to debunk the spectroscopic lore of using only, e.g., 2, 4, 8, or 16 cm⁻¹ resolution and to determine the optimal resolution in terms of both (i) a desired signal-to-noise ratio and (ii) efficient use of acquisition time. The study is facilitated by the availability of reference spectral data for solids and liquids recorded at 2.0 cm⁻¹ resolution and is based on an examination, over the 4000–400 cm⁻¹ range, of 61 liquid and 70 solid spectra, comprising 4237 peaks in total, each of which was also classified as singlet or multiplet in nature. Of the 1765 liquid bands examined, only 27 had widths <5 cm⁻¹; of the 2472 solid bands examined, only 39 had widths <5 cm⁻¹. For both the liquid and solid bands, the distribution of peak widths was skewed: for liquids, the mean peak width was 24.7 cm⁻¹ but the median was 13.7 cm⁻¹; similarly, for solids, the mean was 22.2 cm⁻¹ but the median was 11.2 cm⁻¹. While recognizing that other studies may differ in scope, and limiting the analysis to room-temperature data, we find that the resolution required to resolve 95% of all bands is 5.7 cm⁻¹ for liquids and 5.3 cm⁻¹ for solids; such resolutions would capture the native linewidth (not accounting for instrumental broadening) of 95% of the liquid and solid bands, respectively. After decades of measuring liquids and solids at 4, 8, or 16 cm⁻¹ resolution, we suggest that, accounting only for intrinsic linewidths, an optimized resolution of 6.0 cm⁻¹ captures 91% of all condensed-phase bands, i.e., it broadens only the narrowest 9% of bands, while yielding a large gain in signal-to-noise ratio with minimal loss of specificity.
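The percentile logic behind these coverage figures can be sketched in a few lines: the resolution that captures a fraction f of bands at their native linewidth is simply the (1 − f) quantile of the peak-width distribution, and conversely the coverage of a chosen resolution is the fraction of bands at least that wide. The sketch below is illustrative only; it uses a synthetic lognormal width sample (an assumption chosen to mimic the right-skewed, mean-greater-than-median distributions reported here), not the paper's measured peak widths.

```python
import random
import statistics

random.seed(0)
# Hypothetical peak widths in cm^-1 (NOT the paper's data): a lognormal
# draw mimics a right-skewed distribution where the mean exceeds the median.
widths = [random.lognormvariate(mu=2.6, sigma=0.8) for _ in range(2000)]

def resolution_for_coverage(widths, coverage=0.95):
    """Resolution (cm^-1) capturing `coverage` of bands at native linewidth:
    the (1 - coverage) quantile of the width distribution."""
    ordered = sorted(widths)
    k = int((1.0 - coverage) * (len(ordered) - 1))
    return ordered[k]

def coverage_at_resolution(widths, resolution):
    """Fraction of bands whose native width is at least `resolution`,
    i.e., bands not broadened by that instrumental setting."""
    return sum(w >= resolution for w in widths) / len(widths)

# Skew check: mean > median, as reported for both liquids and solids.
print(f"mean {statistics.mean(widths):.1f}, median {statistics.median(widths):.1f}")
print(f"resolution for 95% coverage: {resolution_for_coverage(widths):.1f} cm^-1")
print(f"coverage at 6.0 cm^-1: {coverage_at_resolution(widths, 6.0):.1%}")
```

With the actual 1765 liquid or 2472 solid band widths in place of the synthetic sample, the same two functions would reproduce the trade-off the abstract describes: coarsening the resolution from ~5.5 to 6.0 cm⁻¹ sacrifices a few percent of the narrowest bands in exchange for signal-to-noise and acquisition-time gains.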