Increases in display resolution, frame rate, and bit depth, together with advances in stereoscopic 3D (S3D) displays, have intensified the demand for efficient compression throughout the imaging pipeline. The aim is typically to reduce bandwidth while presenting content that is visually indistinguishable from the uncompressed original. Subjective image quality assessment is therefore essential, and multiple methods have been proposed. Of these, the ISO/IEC 29170-2 flicker paradigm is a rigorous method used to define visually lossless performance. However, it is possible that the enhanced sensitivity to artifacts in the presence of flicker does not predict their visibility under natural viewing conditions. Here, we test this possibility using high-dynamic-range S3D images and video under flicker and non-flicker protocols. As hypothesized, sensitivity to artifacts was greater under the flicker paradigm, whereas no differences were observed between the non-flicker paradigms. Results were modeled using the Pyramid of Visibility, which predicted that artifact detection was driven by moderately low spatial frequencies. Overall, our results confirm that the flicker paradigm provides a conservative estimate of visually lossless performance; it is highly unlikely to miss artifacts that would be visible under normal viewing. Conversely, artifacts identified by the flicker protocol may not be problematic in practice.
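For context, the Pyramid of Visibility (Watson & Ahumada, 2016) approximates log contrast sensitivity as a linear function of spatial frequency, temporal frequency, and log luminance. A minimal sketch of its standard published form is given below; the coefficients are fitted constants of the general model, not values estimated in this study:

\[
\log_{10} S(W, F, L) \approx c_0 + c_W W + c_F F + c_L \log_{10} L
\]

where \(S\) is contrast sensitivity, \(W\) is spatial frequency (cycles/degree), \(F\) is temporal frequency (Hz), \(L\) is mean luminance, and \(c_0, c_W, c_F, c_L\) are fitted constants (with \(c_W\) and \(c_F\) negative, so sensitivity falls off with increasing frequency). The approximation is understood to hold away from very low spatial and temporal frequencies.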