2023
DOI: 10.1016/j.patrec.2022.12.006
Dual quaternion ambisonics array for six-degree-of-freedom acoustic representation

Cited by 11 publications (2 citation statements)
References 30 publications
“…This is necessary to enable the benefits derived from the use of the Hamilton product instead of the regular dot product, as further discussed in Section III. In the audio domain, first-order Ambisonics [51] signals are naturally suited for a quaternion representation, being four-dimensional and presenting strong correlations among the spatial channels, and the application of quaternion networks to problems related to this audio format has already been extensively investigated [50], [52]- [54]. Nevertheless, in the vast majority of cases, audio-related machine-learning tasks deal with monaural signals, which are usually treated as vectors of scalars (time-domain signals), matrices of scalars (magnitude spectrograms), or 3D tensors (complex spectrograms).…”
Section: Introduction
Confidence: 99%
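The statement above notes that first-order Ambisonics signals are "naturally suited for a quaternion representation, being four-dimensional", with the Hamilton product replacing the regular dot product. A minimal sketch of that mapping (the function name and the sample values are illustrative, not from the cited work): one B-format sample (W, X, Y, Z) is treated as a single quaternion, and two such quaternions are combined with the Hamilton product, which mixes all four channels rather than multiplying them component-wise.

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) arrays.

    Unlike an element-wise or dot product, every output component depends
    on all four components of both inputs, which is what lets quaternion
    layers exploit inter-channel correlations.
    """
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# One first-order Ambisonics (B-format) time sample has four channels
# (W, X, Y, Z), so it maps directly onto one quaternion.
bformat_sample = np.array([1.0, 0.2, -0.3, 0.5])  # hypothetical channel values
filter_coeff = np.array([0.9, 0.1, 0.0, 0.2])     # hypothetical quaternion weight
mixed = hamilton_product(filter_coeff, bformat_sample)
```

The identity quaternion (1, 0, 0, 0) leaves any input unchanged, which is a quick sanity check on the sign pattern above.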
“…n-dimensional data without modifications to architectural layers. For that reason, the already ample field of hypercomplex models based on complex [1], quaternion [2], dual quaternion [3,4], and octonion [1] numbers has been permeated by PHNNs. These networks have been defined with different known backbones such as ResNets [5,6], GANs [7,8], graph neural networks [9], and Transformers [10], among others [11,12].…”
Section: Introduction
Confidence: 99%
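The PHNNs mentioned in this statement handle "n-dimensional data without modifications to architectural layers" by building each weight matrix as a sum of Kronecker products, so the algebra (complex, quaternion, dual quaternion, octonion, ...) is learned rather than hard-coded. A hedged sketch of that parameterized hypercomplex multiplication (PHM) construction, with all names and sizes chosen for illustration:

```python
import numpy as np

def phm_weight(A, B):
    """Build a PHM weight matrix W = sum_i kron(A_i, B_i).

    A: (n, n, n) stack of small 'algebra rule' matrices (learned, shared).
    B: (n, d_out // n, d_in // n) stack of weight blocks.
    Returns W of shape (d_out, d_in), using roughly 1/n the free
    parameters of a dense layer of the same size (plus the small A).
    """
    return sum(np.kron(Ai, Bi) for Ai, Bi in zip(A, B))

n = 4                    # hypercomplex dimension; n=4 mimics quaternions
d_in, d_out = 8, 8       # layer sizes; both must be divisible by n
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n, n))
B = rng.normal(size=(n, d_out // n, d_in // n))

W = phm_weight(A, B)     # full (d_out, d_in) weight, assembled on the fly
x = rng.normal(size=d_in)
y = W @ x                # forward pass of one PHM linear layer
```

Because n is just a hyperparameter here, the same layer covers the complex (n=2), quaternion (n=4), and octonion (n=8) cases without changing the architecture, which is the point the citing passage makes.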