Advanced Binaural Sound Localization in 3-D for Humanoid Robots (2014)
DOI: 10.1109/tim.2014.2308051

Cited by 47 publications (24 citation statements)
References 22 publications
“…SBL beamforming assumes a predefined spatial mapping between the sources and the sensors to infer the DOAs directly from the reconstructed source vector, as opposed to methods (including SBL-based 7 ) which infer the DOA of a single target talker indirectly through the estimation of the relative transfer function between a pair of microphones. 4,6 It is demonstrated both with simulations and experimental data that SBL beamforming offers unambiguous source localization outperforming traditional beamforming methods especially for correlated signals and single-snapshot measurements. The high-resolution SBL reconstruction offers not only speech enhancement over noise, but also speech separation between competing talkers.…”
mentioning
confidence: 97%
“…Some of these issues and applications have been the focus of attention in the last ten years, such as: modeling the localization of sounds in the azimuthal half-plane (Raspaud et al, 2010; Willert et al, 2006); sound localization in the presence of noise and reverberation (Devore et al, 2009; May et al, 2012; Woodruff and Wang, 2012); robotic sound source localization (Keyrouz, 2014; Liu and Meng, 2008). However, most approaches to date rely on experimental cochleagrams or head-related transfer functions (HRTF), whose determination demands an anechoic chamber and the placement of microphones inside the listener's ear canal, in addition to other procedural constraints.…”
Section: Discussion
mentioning
confidence: 99%
“…In order to model the direction dependency of the frequency response of the ears, HRTF have been developed [6]. These functions summarize the filtering effect of head, torso and pinna, for each direction and distance of a sound source [7], [8]. Localization then becomes a search for the pair of left and right HRTF that produces the highest correlation value when applied to the incoming signals.…”
Section: Introduction
mentioning
confidence: 99%
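The search described above — find the left/right HRTF pair that best explains the incoming binaural signals — can be sketched with a convolution identity: if the left and right ear signals are x_L ≈ s∗h_L and x_R ≈ s∗h_R, then x_L∗h_R and x_R∗h_L are both ≈ s∗h_L∗h_R and correlate strongly only for the correct pair. The toy HRIR dictionary below (pure delay pairs) is a hypothetical stand-in for measured HRTFs, used only to make the sketch self-contained.

```python
import numpy as np

def localize_hrtf(x_left, x_right, hrir_pairs):
    # For the true direction, x_left * hR and x_right * hL are both
    # approximately s * hL * hR, so their normalized correlation peaks.
    best, best_score = None, -np.inf
    for direction, (hL, hR) in hrir_pairs.items():
        a = np.convolve(x_left, hR)
        b = np.convolve(x_right, hL)
        score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best, best_score = direction, score
    return best

# Hypothetical HRIR dictionary: delay-only pairs, keyed by azimuth in degrees.
delays = {-60: (1, 5), 0: (3, 3), 60: (5, 1)}  # (left, right) delay in samples
hrirs = {}
for az, (dl, dr) in delays.items():
    hL = np.zeros(8); hL[dl] = 1.0
    hR = np.zeros(8); hR[dr] = 1.0
    hrirs[az] = (hL, hR)

rng = np.random.default_rng(1)
s = rng.standard_normal(256)                 # broadband source signal
hL, hR = hrirs[60]                           # simulate a source at +60 degrees
xL, xR = np.convolve(s, hL), np.convolve(s, hR)
est = localize_hrtf(xL, xR, hrirs)
print(est)  # → 60
```

Real systems search a dense measured HRTF set rather than three delay pairs, but the scoring step is the same exhaustive correlation search the excerpt describes.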
“…Examples of these implementations include an accurate human head replica [7], the outer structure of a robot head [2], [17] and a widely separated array of two microphones with small ear-like structures [18]. Shieh et al [19] used a 12-microphone 19 cm semi-spherical array to localize sounds, in which a set of fuzzy-logic estimators use the level at each microphone to determine the sound direction.…”
Section: Introduction
mentioning
confidence: 99%
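Shieh et al. map per-microphone levels to a direction with fuzzy-logic estimators; as a much simpler stand-in for that idea (not their method), the sketch below estimates direction as the level-weighted average of the microphones' pointing vectors on a hypothetical ring taken from such a semi-spherical array. Louder microphones pull the estimate toward their side.

```python
import numpy as np

def level_weighted_direction(mic_dirs, levels):
    # mic_dirs: (n, 3) unit vectors toward each microphone on the array;
    # levels: measured RMS level at each microphone.
    w = np.asarray(levels, float)
    w = w / w.sum()
    v = (w[:, None] * mic_dirs).sum(axis=0)   # level-weighted mean direction
    return v / np.linalg.norm(v)

# Hypothetical 4-microphone horizontal ring (subset of a hemispherical array).
mic_dirs = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]], float)
levels = [1.0, 0.5, 0.1, 0.5]   # loudest toward +x, so the source is near +x
d = level_weighted_direction(mic_dirs, levels)
print(d)  # → [1. 0. 0.]
```

A fuzzy-logic estimator replaces the linear weighting with membership functions and rules per microphone, which handles the strongly nonlinear level-vs-angle relationship better than this averaging does.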