2018
DOI: 10.1162/jocn_a_01210
Hearing Shapes: Event-related Potentials Reveal the Time Course of Auditory–Visual Sensory Substitution

Abstract: In auditory-visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111-121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun…
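The abstract does not spell out the conversion rules, but the general idea behind Meijer-style image-to-sound substitution is a left-to-right column scan in which vertical pixel position maps to pitch and brightness maps to loudness. The sketch below illustrates that mapping in Python; the scan duration, frequency range, and exponential pitch spacing are illustrative assumptions, not the published parameters of the vOICe system.

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, fs=44100,
                        f_min=500.0, f_max=5000.0):
    """Sketch of a vOICe-style image-to-sound conversion.

    The grayscale image is scanned column by column from left to right
    over `duration` seconds. Each pixel row is assigned a sine-wave
    frequency (top rows high, bottom rows low), and pixel brightness
    scales that sine's amplitude. All parameter values are assumptions
    for illustration only.
    """
    image = np.asarray(image, dtype=float)
    n_rows, n_cols = image.shape

    # Exponentially spaced frequencies, highest frequency for the top row.
    freqs = f_max * (f_min / f_max) ** (np.arange(n_rows) / (n_rows - 1))

    samples_per_col = int(duration * fs / n_cols)
    t = np.arange(samples_per_col) / fs

    chunks = []
    for col in range(n_cols):
        brightness = image[:, col]                      # one column of pixels
        tones = np.sin(2 * np.pi * freqs[:, None] * t)  # one sine per row
        chunks.append((brightness[:, None] * tones).sum(axis=0))
    signal = np.concatenate(chunks)

    # Normalize to [-1, 1] for playback or saving as a WAV file.
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal
```

Under this mapping, for example, a bright diagonal line running from the bottom-left to the top-right of the image would be heard as a rising pitch sweep over the scan.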

Cited by 4 publications (2 citation statements)
References 38 publications
“…htm). Some research from cognitive science has also tried to explain how the brain processes the sound of VOICE [22,23]. The main drawback of VOICE is that it conveys color images to VIP as a very low-level sonification, and users usually need a long time to learn how to infer 3D environments from the sound.…”
Section: Related Work
confidence: 99%
“…Various attempts have been made to develop strategies to effectively convey visual information to the blind with auditory stimuli [8][9][10][11][12]. In addition, many studies on visual-auditory sensory substitution have demonstrated the possibility of localizing [13,14] and recognizing objects [15,16], extracting depth and distance [12,17,18], and performing basic navigational tasks [19][20][21].…”
Section: Introduction
confidence: 99%