2002
DOI: 10.1101/lm.51702

Learning Directions of Objects Specified by Vision, Spatial Audition, or Auditory Spatial Language

Abstract: The modality by which object azimuths (directions) are presented affects learning of multiple locations. In Experiment 1, participants learned sets of three and five object azimuths specified by a visual virtual environment, spatial audition (3D sound), or auditory spatial language. Five azimuths were learned faster when specified by spatial modalities (vision, audition) than by language. Experiment 2 equated the modalities for proprioceptive cues and eliminated spatial cues unique to vision (optic flow) and a…

Cited by 35 publications (32 citation statements); references 25 publications.
“…In contrast, very few spatial errors, that is, the selection of an adjacent loudspeaker instead of the correct one, were reported. In a similar study, Klatzky et al (2002) presented three or five words in sequence from three or five loudspeakers placed at least 30° apart. Each word was presented through a specific loudspeaker, and the listeners' task was to associate specific words with specific sound sources.…”
Section: Localization of Multiple Sound Sources (mentioning; confidence: 99%)
“…Despite the disadvantage of spatial language documented by Klatzky et al (2002), a study by Loomis et al (2002) provided evidence that once spatial representations are formed, they appear to be similar regardless of the input modality. In that study, participants first encoded the location of a target either through 3-D sound or spatial language and then walked to the target without vision along direct and indirect paths.…”
(mentioning; confidence: 99%)
“…and a single sound source, leaving auditory memory for spatial layout largely untouched. For example, Loomis, Klatzky, Philbeck, and Golledge (1998) and Klatzky, Lippa, Loomis, and Golledge (2002) presented multiple auditory targets sequentially in different locations to stationary observers and had them indicate egocentric distances (Loomis et al, 1998) or directions of sound sources individually for each target. Thus, the previous studies did not investigate how spatial relations among sound locations are encoded and represented in memory.…”
(mentioning; confidence: 99%)