Infrared thermographs (IRTs) are commonly used during disease pandemics to screen individuals for elevated body temperature (EBT). To address the limited research on external factors affecting IRT accuracy, we conducted benchtop measurements and computer simulations with two IRTs, each operated with or without an external temperature reference source (ETRS) for temperature compensation. The combination of an IRT and an ETRS forms a screening thermograph (ST). We investigated the effects of viewing angle (θ, 0–75°), ETRS set temperature (T_ETRS, 30–40 °C), ambient temperature (T_atm, 18–32 °C), relative humidity (RH, 15–80%), and working distance (d, 0.4–2.8 m). We found that STs were more accurate than IRTs alone. Across the tested ranges of T_atm and RH, both IRTs exhibited absolute measurement errors of less than 0.97 °C, whereas both STs maintained absolute measurement errors of less than 0.12 °C. The optimal T_ETRS for EBT detection was 36–37 °C. When θ was below 30°, the two STs underestimated the calibration source (CS) temperature (T_CS) by less than 0.05 °C. The computer simulations showed absolute differences of up to 0.28 °C and 0.04 °C between estimated and theoretical temperatures for IRTs and STs, respectively, over d of 0.2–3.0 m, T_atm of 15–35 °C, and RH of 5–95%. The results highlight the importance of precise calibration and environmental control for reliable temperature readings and suggest appropriate ranges for these factors, with the aim of informing current standards documents and best-practice guidelines. These insights deepen our understanding of IRT performance and its sensitivity to external factors, thereby facilitating the development of best practices for accurate EBT measurement.
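For context, a common form of ETRS-based compensation applies the known reference set point as an offset correction; this is a sketch of typical practice, not necessarily the exact correction used in this study, and the symbols T_IRT,CS and T_IRT,ETRS are illustrative:

\[
T_{\text{corrected}} = T_{\text{IRT,CS}} + \left( T_{\text{ETRS}} - T_{\text{IRT,ETRS}} \right),
\]

where \(T_{\text{IRT,CS}}\) is the apparent CS (or subject) temperature reported by the IRT, \(T_{\text{ETRS}}\) is the ETRS set temperature, and \(T_{\text{IRT,ETRS}}\) is the apparent ETRS temperature reported by the IRT in the same scene.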