Transparency in research reporting is crucial for evaluating the reproducibility and validity of research, including potential confounding factors (internal validity) and generalizability (external validity). Here, we focus on visual stimuli—stimuli routinely used to elicit mental processes and behaviors—as a case study to systematically evaluate current practices in reporting visual characteristics, including display setup, stimulus size, luminance/color, and contrast. Our first study scrutinized recent publications (N = 360) in leading journals in neuroscience and psychology—spanning vision, cognitive, clinical, developmental, and social/personality psychology. The second study examined recent publications (N = 114) on visual attentional bias in clinical samples, involving tasks known to be sensitive to visual properties. Analyzing the full text and supplemental materials of these articles, the two studies reveal a systematic lapse in current practices of reporting characteristics of visual stimuli. This reporting failure was not offset by authors making visual materials available online: such sharing was rare (<20%) and cannot substitute for the reporting of visual characteristics. Failure to report stimulus properties hinders efforts to build cumulative science: (a) direct replications become challenging, if not impossible; (b) internal validity may be compromised; and (c) generalizability across stimulus properties is prematurely assumed, and its evaluation is precluded in the first place. Our findings have immediate implications for journal policies on reporting practices, urging explicit emphasis on transparent reporting of stimulus properties, particularly when perceptual components are involved. To assist in this effort, we provide templates for reporting study setup, visual displays, and visual stimuli.