Interspeech 2021
DOI: 10.21437/interspeech.2021-1027
Conditional Independence for Pretext Task Selection in Self-Supervised Speech Representation Learning

Abstract: Through solving pretext tasks, self-supervised learning (SSL) leverages unlabeled data to extract useful latent representations that replace traditional input features in the downstream task. A common pretext task consists of pretraining an SSL model on pseudo-labels derived from the original signal. This technique is particularly relevant for speech data, where various meaningful signal processing features may serve as pseudo-labels. However, the process of selecting pseudo-labels, for speech or other types of data…
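For intuition on the kind of signal processing features the abstract refers to, the sketch below (illustrative, not taken from the paper) derives two classic frame-level quantities, log-energy and zero-crossing rate, that could serve as pseudo-label sources; the frame length and hop size are arbitrary assumptions.

```python
# A minimal sketch (not from the paper) of deriving frame-level
# signal-processing pseudo-labels from a raw waveform; the frame length,
# hop size, and chosen features are illustrative assumptions.
import numpy as np

def frame_pseudo_labels(wav: np.ndarray, frame: int = 400, hop: int = 160):
    """Return per-frame log-energy and zero-crossing rate for a 1-D signal."""
    n_frames = 1 + (len(wav) - frame) // hop
    idx = np.arange(frame)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = wav[idx]  # (n_frames, frame)
    log_energy = np.log(np.sum(frames ** 2, axis=1) + 1e-8)
    # Fraction of adjacent samples whose sign changes within each frame
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return log_energy, zcr

wav = np.random.randn(16000)  # 1 s of dummy audio at 16 kHz
log_e, zcr = frame_pseudo_labels(wav)
print(log_e.shape, zcr.shape)  # (98,) (98,)
```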

Cited by 4 publications (2 citation statements) · References 59 publications
“…These SSL embedding systems have been comprehensively benchmarked, considering factors such as model parameters and accuracy across various tasks [39]. For accessibility, pre-trained Automatic Speech Recognition (ASR) embedding models are readily available through the Hugging Face repository [40].…”
Section: Related Work (mentioning, confidence: 99%)
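As a concrete illustration of the availability point in this statement, such pre-trained models can be loaded in a few lines. The sketch below is an assumption on our part, not code from the cited work; it uses the transformers library with the facebook/wav2vec2-base-960h checkpoint (one of many SSL speech checkpoints on Hugging Face) to extract frame-level embeddings from raw audio.

```python
# A minimal sketch, assuming the `transformers` and `torch` packages are
# installed; the checkpoint name is one example among many on Hugging Face.
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_name = "facebook/wav2vec2-base-960h"
extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

waveform = torch.randn(16000)  # 1 s of dummy audio at 16 kHz
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state  # (1, n_frames, hidden_dim)
print(embeddings.shape)
```

The resulting frame-level embeddings can then be pooled or fed directly to a downstream classifier in place of traditional input features.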
“…The principal issue with conditional independence is the difficulty of obtaining, on realistic data, good estimates of how independent two variables are given a third [59]. In previous work [73], we proposed a simple way to estimate this conditional independence. The method has proven effective for individual pretext task selection, as the utility estimator correlates highly with the final downstream performances.…”
Section: Conditional Independence Estimator Computation (mentioning, confidence: 99%)
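The quoted work refers to its own estimator [73], which is not reproduced here. As a generic stand-in, the sketch below implements a linear partial-correlation proxy for conditional independence, a standard baseline and explicitly not the authors' method: the conditioning variable z is regressed out of both x and y, and the residuals are correlated; values near zero are consistent with x ⟂ y | z under linear-Gaussian assumptions.

```python
# A hedged sketch (not the paper's estimator): a linear partial-correlation
# proxy for conditional independence of x and y given z.
import numpy as np

def partial_correlation(x: np.ndarray, y: np.ndarray, z: np.ndarray) -> float:
    """x, y: (n,) arrays; z: (n, d) conditioning features."""
    z_aug = np.column_stack([np.ones(len(z)), z])  # add an intercept column
    # Least-squares residuals of x and y after projecting onto z
    rx = x - z_aug @ np.linalg.lstsq(z_aug, x, rcond=None)[0]
    ry = y - z_aug @ np.linalg.lstsq(z_aug, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 3))
x = z @ rng.normal(size=3) + 0.1 * rng.normal(size=500)
y = z @ rng.normal(size=3) + 0.1 * rng.normal(size=500)
print(partial_correlation(x, y, z))  # near 0: x and y look independent given z
```

This linear proxy only captures linear dependence; the difficulty the quote points to is precisely that reliable nonlinear conditional independence estimates are hard to obtain on realistic data.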