Interspeech 2021
DOI: 10.21437/interspeech.2021-1066

Dropout Regularization for Self-Supervised Learning of Transformer Encoder Speech Representation

Cited by 6 publications (4 citation statements)
References: 0 publications
“…This work [13] leveraged a part of the Pan-Cancer dataset to pre-train convolutional neural networks (CNNs) for predicting survival rates in lung cancer. The researchers tackled the lack of structure in gene expression data by reformatting RNA-seq samples into gene expression images, which enabled the extraction of high-level features through CNNs.…”
Section: Related Work
confidence: 99%
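
To make the reshaping step in this statement concrete, the sketch below shows one plausible way to turn a 1-D RNA-seq expression vector into a square "gene expression image" for CNN input. The function name, image size, and normalization are illustrative assumptions, not the cited paper's exact pipeline.

```python
import numpy as np

def expression_to_image(expr: np.ndarray, side: int = 128) -> np.ndarray:
    """Reshape a 1-D gene expression vector into a square 2-D 'image'
    so a CNN can extract spatial features. Assumes a fixed gene order;
    unused pixels are zero-padded. (Hypothetical helper for illustration.)"""
    img = np.zeros(side * side, dtype=np.float32)
    n = min(expr.shape[0], side * side)
    img[:n] = expr[:n]
    # Min-max normalize to [0, 1] so pixel intensities are comparable.
    lo, hi = img.min(), img.max()
    if hi > lo:
        img = (img - lo) / (hi - lo)
    return img.reshape(side, side)

# Example: a 20,000-gene sample becomes a 128x128 single-channel image.
sample = np.random.rand(20000).astype(np.float32)
print(expression_to_image(sample).shape)  # (128, 128)
```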
“…DeCoAR 2.0 [110] uses vector quantization, which is shown to improve the learned representations. Furthermore, two dropout regularization methods, attention dropout and layer dropout, are introduced with the TERA model [106], [111]. Both methods are variations on the original dropout method [112].…”
Section: B. Generative Approaches, 1) Motivation
confidence: 99%
“…DeCoAR 2.0 [92] uses vector quantization, which is shown to improve the learned representations. Furthermore, two dropout regularization methods, attention dropout and layer dropout, are introduced with the TERA model [93].…”
Section: Multi-Target APC, BEST-RQ
confidence: 99%
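
For context on the vector quantization step mentioned here, the sketch below shows a generic nearest-neighbour codebook quantizer with a straight-through gradient estimator. The codebook size and dimensions are assumptions, and this is the general technique rather than DeCoAR 2.0's exact module.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Generic nearest-neighbour vector quantization: each input frame
    is replaced by its closest codebook entry, with a straight-through
    estimator so gradients still reach the encoder."""

    def __init__(self, num_codes=320, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                    # z: (batch, time, dim)
        flat = z.reshape(-1, z.size(-1))     # (batch*time, dim)
        # Squared L2 distance from every frame to every codebook entry.
        d = (flat.pow(2).sum(1, keepdim=True)
             - 2 * flat @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        idx = d.argmin(dim=1)
        q = self.codebook(idx).view_as(z)
        # Straight-through: forward uses q, backward copies grads to z.
        return z + (q - z).detach()

vq = VectorQuantizer()
print(vq(torch.randn(2, 100, 256)).shape)  # torch.Size([2, 100, 256])
```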