2022
DOI: 10.1051/aacus/2022032
Auditory-visual scenes for hearing research

Abstract: While experimentation with synthetic stimuli in abstracted listening situations has a long-standing and successful history in hearing research, there is increasing interest in closing the remaining gap to real-life listening by replicating situations with high ecological validity in the lab. This is important for understanding the underlying auditory mechanisms and their relevance in real-life situations, as well as for developing and evaluating increasingly sophisticated algorithms for hearing assistance.…

Cited by 9 publications (9 citation statements) · References 54 publications
“…In this work, we described what steps need to be taken to implement VR scenes for hearing device fine-tuning. It shows that considerable effort was needed to further develop the virtual environments as they were published in [8] into VR scenes that can be used for hearing device fine-tuning in the clinic. We therefore decided to publish the VR scenes as an application that is ready to use.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
“…The virtual pub is based on the pub presented in [8,17]. Diffuse babble noise fitting this environment was generated by calculating the sum of the convolutions of the recorded IRs from the pub [17] with conversation snippets.…”
Section: Implementation of Virtual Acoustic Scenes (citation type: mentioning)
confidence: 99%
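
The babble-generation step quoted above can be illustrated compactly. The following is a minimal Python sketch, not code from the cited papers; the function name, the use of SciPy, and the assumption of mono impulse responses and speech snippets at a shared sampling rate are all introduced here for illustration.

import numpy as np
from scipy.signal import fftconvolve

def make_babble(impulse_responses, snippets, length):
    # Sum of convolutions: each recorded room IR is convolved with one
    # conversation snippet, and the reverberated snippets are superposed.
    babble = np.zeros(length)
    for ir, snippet in zip(impulse_responses, snippets):
        wet = fftconvolve(snippet, ir)[:length]   # reverberated snippet
        babble[:len(wet)] += wet                  # superpose the sources
    return babble / np.max(np.abs(babble))        # simple peak normalization

With one impulse response per assumed source position in the pub, the superposition approximates the diffuse babble field described in the quoted passage.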
“…Although it was only tested with lateralization and localization experiments in this study (see also Supplementary materials), by selecting different stage combinations and different decision models, its applications can go beyond the provided experimental examples. By referring to the MATLAB examples provided, users are able to replace the current functions with customized ones, such as incorporating other HRIRs or BRIRs for complex acoustic environments (e.g., [53, 92–98]), self-developed AGCs, different coding strategies, different AN models, or different decision models. Each model stage alone can be used or extended for both research and education purposes.…”
Section: Possible Applications With Further Extension (citation type: mentioning)
confidence: 99%
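
The stage-swapping idea in the quoted passage can be sketched as a chain of callables sharing a common interface. The cited work provides MATLAB examples; the Python below is only a hedged illustration, and every function name (hrir_frontend, simple_agc, run_chain) and every parameter is hypothetical rather than taken from the cited toolbox.

import numpy as np
from scipy.signal import fftconvolve

def hrir_frontend(signal, hrir_left, hrir_right):
    # Spatialize a mono signal with a (hypothetical) HRIR pair.
    return np.stack([fftconvolve(signal, hrir_left),
                     fftconvolve(signal, hrir_right)])

def simple_agc(binaural, target_rms=0.05):
    # Placeholder automatic gain control: scale to a target RMS level.
    rms = np.sqrt(np.mean(binaural ** 2)) + 1e-12
    return binaural * (target_rms / rms)

def run_chain(signal, stages):
    # Apply the stages in order; any stage can be replaced by a customized
    # callable with the same input/output convention.
    out = signal
    for stage in stages:
        out = stage(out)
    return out

# Example chain with a customized front end (my_hrir_l, my_hrir_r,
# my_an_model and my_decision_model are hypothetical user-supplied objects):
# chain = [lambda x: hrir_frontend(x, my_hrir_l, my_hrir_r),
#          simple_agc, my_an_model, my_decision_model]
# decision = run_chain(mono_stimulus, chain)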