Angle perception is an important mid-level visual process that combines line features into an integrated shape percept. Previous studies have proposed two theories of angle perception: that an angle is perceived as a combination of its bounding lines, or as a holistic feature obeying Weber's law. However, neither theory explains the dual-peak fluctuations of the just-noticeable difference (JND) across angle sizes. In this study, we found that the human visual system processes angles in two stages: first, it encodes the orientations of the bounding lines and combines them into an angle feature; second, it estimates the angle within an orthogonal internal reference frame (IRF). The IRF model fits the dual-peak JND fluctuations that neither the line-combination theory nor Weber's law can explain. A statistical analysis of natural images revealed that the IRF aligns with the distribution of angle features in the natural environment, suggesting that the IRF reflects human prior knowledge of angles in the real world. This study provides a new computational framework for angle discrimination, resolving a long-standing debate on angle perception.
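The contrast between the two accounts can be made concrete with a minimal sketch. This is not the authors' model: the Weber's-law prediction (JND proportional to angle size) is standard, but the dual-peak function below is a hypothetical illustration of how an orthogonal-reference-frame account could make discriminability dip near the 0/90/180 degree axes and peak between them.

```python
import math

def weber_jnd(angle_deg, weber_fraction=0.05):
    # Weber's law: JND grows linearly with base angle, so it is
    # monotonic and cannot produce dual-peak fluctuations.
    return weber_fraction * angle_deg

def irf_jnd(angle_deg, base=1.0, gain=2.0):
    # Hypothetical illustration (not the paper's fitted model): JND dips at
    # the orthogonal reference axes (0, 90, 180 deg) and peaks midway
    # between them (45 and 135 deg), giving two peaks over 0-180 deg.
    return base + gain * abs(math.sin(math.radians(2 * angle_deg)))

print([round(weber_jnd(a), 2) for a in (30, 60, 90, 120, 150)])
# -> [1.5, 3.0, 4.5, 6.0, 7.5]  (monotonic)
print([round(irf_jnd(a), 2) for a in (30, 60, 90, 120, 150)])
# -> [2.73, 2.73, 1.0, 2.73, 2.73]  (dip at 90 deg, peaks on either side)
```

The point of the sketch is purely qualitative: any Weber-fraction model yields a monotonic JND curve, whereas a reference-frame-based cost can rise and fall twice across 0-180 degrees.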
When moving through the world, the human visual system uses both motion and form information to estimate the direction of self-motion (i.e., heading). However, little is known about the cortical areas responsible for this task. This brain-imaging study addressed the question using visual stimuli consisting of randomly distributed dot pairs oriented toward one locus on a screen (the form-defined focus of expansion [FoE]) but moving away from a different locus (the motion-defined FoE) to simulate observer translation. We first fixed the motion-defined FoE location and shifted the form-defined FoE location. We then made the locations of the motion- and form-defined FoEs either congruent (at the same location in the display) or incongruent (on opposite sides of the display). The motion- or form-defined FoE shift was identical in the two stimulus types, but the perceived heading direction shifted for the congruent, not the incongruent, stimuli. Participants (both sexes) made a task-irrelevant (contrast-discrimination) judgment during scanning. Searchlight and ROI-based multivoxel pattern analyses revealed that early visual areas V1, V2, and V3 responded to either the motion- or the form-defined FoE shift. Beyond V3, only the dorsal areas V3A and V3B/KO responded to such shifts. Furthermore, area V3B/KO showed significantly higher decoding accuracy for the congruent than for the incongruent stimuli. Our results provide direct evidence that area V3B/KO does not merely respond to motion and form cues but integrates the two for the perception of heading.
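The logic of ROI-based multivoxel pattern analysis can be sketched in miniature. This is not the study's pipeline: the data below are synthetic, and the nearest-centroid classifier with leave-one-out validation stands in for whatever decoder the authors used; the question it answers is the same, namely whether voxel activity patterns discriminate two FoE-shift conditions above chance.

```python
import random

random.seed(0)

def make_pattern(mean, n_voxels=20, noise=1.0):
    # Synthetic "voxel pattern": one noisy response per voxel.
    return [random.gauss(mean, noise) for _ in range(n_voxels)]

# Two hypothetical conditions (e.g., leftward vs. rightward FoE shift),
# ten simulated trials each, labeled 0 and 1.
patterns = [(make_pattern(0.0), 0) for _ in range(10)] + \
           [(make_pattern(1.0), 1) for _ in range(10)]

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def loo_accuracy(data):
    # Leave-one-out cross-validation with a nearest-centroid decoder:
    # hold out one trial, fit centroids on the rest, classify the held-out
    # pattern by its nearest class centroid.
    hits = 0
    for i, (x, label) in enumerate(data):
        train = data[:i] + data[i + 1:]
        cents = {c: centroid([p for p, l in train if l == c]) for c in (0, 1)}
        pred = min(cents, key=lambda c: dist2(x, cents[c]))
        hits += pred == label
    return hits / len(data)

print(f"decoding accuracy: {loo_accuracy(patterns):.2f}")
```

Accuracy reliably above the 0.5 chance level indicates that the patterns carry condition information; comparing such accuracies between congruent and incongruent stimuli is the kind of contrast that singled out area V3B/KO.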