2022
DOI: 10.1109/msp.2022.3182929

The SONICOM Project: Artificial Intelligence-Driven Immersive Audio, From Personalization to Modeling [Applications Corner]

Abstract: Every individual perceives spatial audio differently due in large part to the unique and complex shape of their ears and head. Therefore, high-quality headphone-based spatial audio should be uniquely tailored to each listener in an effective and efficient manner. Artificial Intelligence (AI) is a powerful tool that can be used to drive forward research in spatial audio personalisation. The SONICOM project aims to employ a data-driven approach to link the physiological characteristics of the ear to the individu…

Cited by 5 publications (2 citation statements)
References 19 publications
“…The SONICOM dataset [5,20] is a publicly-released HRTF dataset which aims at facilitating reproducible research in the spatial acoustics and immersive audio domain by including in a single database HRTFs measured from an increasingly large number of subjects (200 subjects were used in this work).…”
Section: SONICOM and ARI
mentioning
confidence: 99%
“…Created within the remit of the SONICOM project [19], 1 this technical paper introduces the publicly-released SONICOM HRTF dataset, which aims at facilitating reproducible research (i.e., allowing researchers to ensure that they can repeat the same analysis multiple times with the same results [20]) in the spatial acoustics and immersive audio domain by including in a single database HRTFs measured from an increasingly large number of subjects (currently 120), as well as headphones transfer functions; 3D scans of ears, head, and torso; and RGB + depth pictures at different angles around the subject's head.…”
Section: Introduction
mentioning
confidence: 99%
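
The statements above describe an HRTF dataset containing per-subject acoustic measurements. For readers who want to work with this kind of data, below is a minimal sketch of reading one subject's HRIRs; it assumes the measurements are distributed as SOFA files and read with the Python `sofar` package, and the filename is purely illustrative. None of these details are confirmed by the text above.

```python
# Minimal sketch: load one subject's HRIRs from a SOFA file and pick a direction.
# Assumptions (not stated above): SOFA "SimpleFreeFieldHRIR" layout, `sofar` installed,
# and a hypothetical filename; the real dataset naming and layout may differ.
import numpy as np
import sofar

sofa = sofar.read_sofa("P0001_FreeFieldComp_48kHz.sofa")  # hypothetical file name

hrirs = sofa.Data_IR             # impulse responses, shape (measurements, 2 ears, samples)
fs = sofa.Data_SamplingRate      # sampling rate in Hz
positions = sofa.SourcePosition  # (azimuth, elevation, distance) per measurement

# Find the measurement closest to azimuth 90 deg, elevation 0 deg.
target = np.array([90.0, 0.0])
idx = np.argmin(np.linalg.norm(positions[:, :2] - target, axis=1))
left_hrir, right_hrir = hrirs[idx, 0, :], hrirs[idx, 1, :]

print(f"fs={fs} Hz, {hrirs.shape[0]} directions, HRIR length {hrirs.shape[2]} samples")
```

Convolving a mono signal with `left_hrir` and `right_hrir` would then render that source direction binaurally for this (measured) subject, which is the kind of per-listener rendering the personalisation work above targets.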