Interspeech 2023
DOI: 10.21437/interspeech.2023-871

Self-supervised Predictive Coding Models Encode Speaker and Phonetic Information in Orthogonal Subspaces

Oli Danyi Liu, Hao Tang, Sharon Goldwater

Abstract: Self-supervised speech representations are known to encode both speaker and phonetic information, but how they are distributed in the high-dimensional space remains largely unexplored. We hypothesize that they are encoded in orthogonal subspaces, a property that lends itself to simple disentanglement. Applying principal component analysis to representations of two predictive coding models, we identify two subspaces that capture speaker and phonetic variances, and confirm that they are nearly orthogonal. Based …
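The method the abstract describes can be sketched in a few lines: estimate a speaker subspace and a phonetic subspace by PCA over the class-mean representations, then measure the principal angles between the two subspaces (angles near 90° indicate near-orthogonality). The following is a minimal illustration, not the authors' code; the representation matrix, label arrays, and subspace dimension k are all hypothetical stand-ins.

```python
import numpy as np

# Hypothetical inputs: X is an (n_frames, d) matrix of model representations;
# speaker_ids and phone_ids are length-n_frames integer label arrays.
def subspace(X, labels, k):
    """Top-k PCA subspace of the class-mean representations."""
    means = np.stack([X[labels == c].mean(axis=0) for c in np.unique(labels)])
    means -= means.mean(axis=0)                   # center the class means
    _, _, vt = np.linalg.svd(means, full_matrices=False)
    return vt[:k].T                               # (d, k) orthonormal basis

def principal_angles_deg(A, B):
    """Principal angles between the column spaces of A and B, in degrees."""
    s = np.linalg.svd(A.T @ B, compute_uv=False)  # cosines of the angles
    return np.degrees(np.arccos(np.clip(s, -1.0, 1.0)))

# Usage sketch with random stand-in data; real use would take frame-level
# model representations with speaker and phone labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 256))
speaker_ids = rng.integers(0, 20, size=5000)
phone_ids = rng.integers(0, 40, size=5000)
U_spk = subspace(X, speaker_ids, k=10)
U_phn = subspace(X, phone_ids, k=10)
print(principal_angles_deg(U_spk, U_phn))        # ~90 deg => near-orthogonal
```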

Cited by 3 publications (1 citation statement); references 18 publications.
“…The research community is actively working on finding ways to disentangle the representational subspaces encoding the phonetic and speaker information, e.g., Liu et al. (2023). However, much less attention has been dedicated to disentangling the information pertaining to the acoustic conditions (e.g., background noise and reverberation).…”
Section: Discussion
Confidence: 99%