Visual attention enables us to engage selectively with the most important events in the world around us. Yet, sometimes, we fail to notice salient events. "Change blindness", the surprising inability to detect and identify salient changes that occur in flashing visual images, enables measuring such failures in a laboratory setting. We discovered that human participants (n=39) varied widely (by twofold) in their ability to detect changes when tested on a laboratory change blindness task. To understand the reasons for these differences in change detection ability, we characterized eye-movement patterns and gaze strategies as participants scanned these images. Surprisingly, we found no systematic differences in scan paths, fixation maps, or saccade patterns between participants who were successful at detecting changes and those who were not. Yet, two low-level gaze metrics, the mean fixation duration and the variance of saccade amplitudes, systematically predicted change detection success. To explain the mechanism by which these gaze metrics could influence performance, we developed a neurally constrained model, based on the Bayesian framework of sequential probability ratio testing (SPRT), which simulated the gaze strategies of successful and unsuccessful observers. The model's ability to detect changes varied systematically with mean fixation duration and saccade amplitude variance, closely mimicking observations in the human data. Moreover, the model's success rates correlated robustly with human observers' success rates across images. Our model explains putative human attention mechanisms during change blindness tasks and provides key insights into effective strategies for shifting gaze and attention for artificial agents navigating dynamic, crowded environments.
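The abstract describes the model only at a high level; its full specification appears in the paper itself. For intuition, the sketch below shows how an SPRT-style evidence accumulator could link the two gaze metrics to detection success: longer fixations yield more evidence samples per fixated location, while saccade amplitude variance governs how widely gaze explores. Everything here, including the function name sprt_change_detection, the Gaussian evidence model, the random-walk gaze policy, and all parameter values, is an illustrative assumption, not the authors' fitted model.

import numpy as np

rng = np.random.default_rng(0)

def sprt_change_detection(mean_fix_dur=200.0, saccade_amp_var=4.0,
                          n_locations=20, change_loc=0, drift=0.5,
                          noise_sd=1.0, threshold=np.log(19),
                          max_fixations=40):
    # Hypothetical SPRT-style observer: at each fixation, accumulate the
    # log-likelihood ratio of "change" (samples ~ N(drift, sd)) versus
    # "no change" (samples ~ N(0, sd)) at the fixated location. Unlike
    # the classical two-boundary SPRT, we use only an upper (detection)
    # boundary and simply move on otherwise, for simplicity.
    loc = int(rng.integers(n_locations))   # start at a random location
    llr = np.zeros(n_locations)            # per-location evidence
    for _ in range(max_fixations):
        # Longer fixations -> more evidence samples (1 per ~10 ms here).
        n_samples = max(1, int(rng.exponential(mean_fix_dur) / 10))
        signal = drift if loc == change_loc else 0.0
        x = rng.normal(signal, noise_sd, n_samples)
        # Gaussian log-likelihood ratio, summed over samples.
        llr[loc] += np.sum(drift * (x - drift / 2) / noise_sd**2)
        if llr[loc] > threshold:
            return True                    # change detected
        # Saccade: step size drawn with the given amplitude variance.
        step = int(round(rng.normal(0.0, np.sqrt(saccade_amp_var))))
        loc = (loc + step) % n_locations
    return False                           # fixation budget exhausted

# Illustrative comparison: detection rate for short vs. long fixations.
for fix_dur in (100.0, 300.0):
    rate = np.mean([sprt_change_detection(mean_fix_dur=fix_dur)
                    for _ in range(500)])
    print(f"mean fixation {fix_dur:.0f} ms: hit rate = {rate:.2f}")

Under these toy assumptions, longer mean fixations raise the hit rate because each visit to the changed location accumulates more evidence, while very large saccade-amplitude variance scatters gaze and delays revisits, qualitatively matching the dependence reported in the abstract.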
Author Summary

Our brain has the remarkable capacity to pay attention, selectively, to the most important events in the world around us. Yet, sometimes, we fail spectacularly to notice even the most salient events. We tested this phenomenon in the laboratory with a change-blindness experiment, by having participants freely scan and detect changes across discontinuous image pairs. Participants varied widely in their ability to detect these changes. Surprisingly, their success correlated with differences in low-level gaze metrics. A Bayesian model of eye movements, which incorporated neural constraints on stimulus encoding, explained these differences and closely mimicked human performance in this change blindness task. The model's gaze strategies provide relevant insights for artificial, neuromorphic agents navigating dynamic, crowded environments.