Visual information around us is rarely static. To perform a task in such a dynamic environment, we often have to compare current visual input with our working memory (WM) representation of the immediate past. However, little is known about what happens to a WM representation when it is compared with perceptual input. To test this, we asked young adults (N = 170 total in three experiments) to compare a new visual input with a WM representation prior to reporting the WM representation. We found that the perceptual comparison biased the WM report, especially when the input was subjectively similar to the WM representation. Furthermore, using computational modeling and individual-differences analyses, we found that this similarity-induced memory bias was driven by representational integration, rather than incidental confusion, between the WM representation and subjectively similar input. Together, our findings highlight a novel source of WM distortion and suggest a general mechanism that determines how WM interacts with new visual input.
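To make the modeling contrast concrete, the sketch below simulates the two candidate mechanisms the abstract distinguishes: representational integration (every report is pulled partway toward the compared input, shifting the whole error distribution) versus incidental confusion (the input is occasionally reported outright, producing a mixture of two modes). This is a minimal illustration, not the authors' actual model; the memory/probe values, weight `w`, swap probability `p_swap`, and noise level are hypothetical, and circular wraparound of the feature space is ignored for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_reports(memory, probe, n=10_000, noise_sd=10.0,
                     w=0.3, p_swap=0.3, mechanism="integration"):
    """Simulate WM reports of a one-dimensional feature value (e.g., a
    color angle in degrees) after comparison with a perceptual probe.

    integration: each report is a weighted average of memory and probe,
                 so the entire error distribution shifts toward the probe.
    confusion:   on a fraction p_swap of trials the probe is reported
                 instead, yielding a bimodal mixture rather than a shift.
    """
    noise = rng.normal(0, noise_sd, n)
    if mechanism == "integration":
        reports = (1 - w) * memory + w * probe + noise
    else:  # incidental confusion (swap) account
        swapped = rng.random(n) < p_swap
        reports = np.where(swapped, probe, memory) + noise
    return reports

memory, probe = 0.0, 30.0  # hypothetical similar memory and probe values
for mech in ("integration", "confusion"):
    r = simulate_reports(memory, probe, mechanism=mech)
    print(f"{mech:>11}: mean error = {r.mean():5.2f}, sd = {r.std():5.2f}")
```

Both mechanisms move the mean report toward the probe, but they predict differently shaped report distributions, which is what lets model comparison (and individual-differences analyses) tease them apart.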
Objective: Conventional directional hearing aid microphone technology can work against listening intentions when a talker and listener walk side by side. The purpose of the current study was to evaluate hearing aids that use a motion sensor to address listening needs while walking. Design: Each participant completed two walks in randomised order, one with each of two hearing aid programs: (1) conventional beamformer adaptation, which activated an adaptive, multiband beamformer in loud environments, and (2) motion-based beamformer adaptation, which activated a pinna-mimicking microphone setting when walking was detected. Participants walked along a pre-defined track and completed tasks assessing speech understanding and environmental awareness. Study Sample: Participants were 22 older adults with moderate-to-severe hearing loss and experience using hearing aids. Results: More participants preferred the motion-based over the conventional beamformer adaptation for speech understanding, environmental awareness, overall listening, and sound quality (p < 0.05). Measures of speech understanding (p < 0.01) and localisation of sound stimuli (p < 0.05) were significantly better with the motion-based than with the conventional beamformer adaptation. Conclusions: The results suggest that hearing aid users can benefit from beamforming that uses motion sensor input to adapt the signal processing to the user's activity, although the real-world setup of this study had limitations.
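The decision logic that distinguishes the two programs can be sketched as follows. This is a hedged illustration of the adaptation rules as described in the abstract, not the manufacturer's implementation: real devices use proprietary acoustic and motion classifiers, and the level threshold, cadence threshold, and program names below are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical thresholds; real hearing aids use proprietary classifiers.
LOUD_ENV_DB = 68.0     # level above which the conventional program narrows
STEP_CADENCE_HZ = 1.5  # step cadence taken to indicate walking

@dataclass
class SensorFrame:
    level_db: float    # broadband input level estimate
    cadence_hz: float  # step cadence from the onboard motion sensor

def conventional_program(s: SensorFrame) -> str:
    """Conventional adaptation: input level alone drives the beamformer."""
    if s.level_db > LOUD_ENV_DB:
        return "adaptive_multiband_beamformer"
    return "omnidirectional"

def motion_based_program(s: SensorFrame) -> str:
    """Motion-based adaptation: detected walking overrides the level rule
    with a pinna-mimicking (natural directivity) microphone setting."""
    if s.cadence_hz >= STEP_CADENCE_HZ:
        return "pinna_mimicking"
    return conventional_program(s)

# Walking beside a talker on a loud street: the conventional program narrows
# the beam forward, while the motion-based program keeps natural directivity,
# preserving access to the talker at the listener's side.
frame = SensorFrame(level_db=72.0, cadence_hz=1.8)
print(conventional_program(frame))  # adaptive_multiband_beamformer
print(motion_based_program(frame))  # pinna_mimicking
```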