In visual search, participants can incidentally learn spatial target-distractor configurations, leading to shorter search times for repeated compared to novel configurations. Usually, this is tested within the limited visual field provided by a two-dimensional computer monitor. Here we present, for the first time, an implementation of a classic contextual cueing task (search for a T shape among L shapes) in a three-dimensional virtual environment. This enabled us to test whether the typical finding of incidental learning of repeated search configurations, manifested in shorter search times, would hold in a three-dimensional virtual reality (VR) environment. Combining VR and contextual cueing also allowed us to test whether contextual cueing holds for targets outside the initial field of view (FOV), which require head movements to be found. In keeping with two-dimensional search studies, search times were reduced after the first epoch and remained stable throughout the remainder of the experiment. Importantly, comparable search time reductions were observed for targets both within and outside the initial FOV. The results show that a repeated distractor-only configuration in the initial FOV can guide search to target locations that require a head movement to be seen.
Purpose: Retinal implants (RIs) provide new vision for patients suffering from photoreceptor degeneration in the retina. The limited vision gained with an RI, however, leaves room for improvement through training regimes.
Methods: Two groups of normal-sighted participants were trained with either videos or still images of daily objects in a labeling task. Object appearance was simulated to resemble RI perception. In Experiment 1, the training effect was measured as the change in performance during training, and the same labeling task was repeated after 1 week to test retention. In Experiment 2, with a different pool of participants, a reverse labeling task was administered before (pre-test) and after the training (post-test) to test whether the training effect generalized to a different task context.
Results: Both groups showed improved object recognition through training that was maintained for a week, with greater improvement in the video group (Experiment 1). Both groups also showed improved object recognition in a different task that was maintained for a week, but the video group did not show better retention than the image group (Experiment 2).
Conclusions: Training with video materials leads to greater improvement than training with still images under simulated RI perception, but this advantage was specific to the trained task.
Translational Relevance: We recommend videos over still images as training materials for patients with RIs to improve object recognition when the task goal is highly specific. We also note that pursuing highly specific training goals risks limiting the generalization of the training effects.