Interspeech 2014
DOI: 10.21437/interspeech.2014-228
Evaluating speech features with the minimal-pair ABX task (II): resistance to noise

Cited by 23 publications (16 citation statements). References 17 publications.
“…By highlighting how the gradient computation separates into two steps we derived an importance-sampling estimate for the gradient that often only needs to evaluate the computationally cheaper part to provide the estimate. Skipping the computationally costly evaluation of the gradient of the model itself as often as possible led to a practical speedup that is independent of other improvements provided by more advanced optimization algorithms (Johnson and Zhang, 2013; Schatz et al., 2014). Our method relies on the inverse transformation being unique and efficient to compute.…”
Section: Discussion
confidence: 99%
“…In recent years several more advanced stochastic optimization algorithms have been proposed, such as stochastic average gradients (SAG) (Schmidt et al., 2017), stochastic variance reduced gradients (SVRG) (Johnson and Zhang, 2013), and SAGA that combines elements of both (Schatz et al., 2014). However, to our knowledge these techniques have not been successfully adapted for automatic variational inference.…”
Section: Stochastic Gradient Optimization
confidence: 99%
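The SVRG scheme named in the statement above can be sketched briefly. This is a minimal illustration of the variance-reduction idea from Johnson and Zhang (2013), not code from any of the cited papers; the function names and the toy quadratic objective are my own for illustration.

```python
# Hedged sketch of SVRG: periodically compute a full gradient at a
# snapshot point, then correct cheap stochastic gradients with it.
import random

def svrg(grads, w0, lr=0.1, epochs=20, inner=None):
    """grads: list of per-example gradient functions g_i(w).
    Returns the iterate after SVRG-style updates."""
    n = len(grads)
    inner = inner or n
    w = w0
    for _ in range(epochs):
        w_snap = w
        # Full (expensive) gradient at the snapshot
        mu = sum(g(w_snap) for g in grads) / n
        for _ in range(inner):
            i = random.randrange(n)
            # Variance-reduced stochastic gradient
            v = grads[i](w) - grads[i](w_snap) + mu
            w = w - lr * v
    return w

# Toy average of quadratics f_i(w) = (w - t_i)^2 / 2; minimum at mean(t)
targets = [1.0, 2.0, 3.0]
gs = [lambda w, t=t: w - t for t in targets]
w_star = svrg(gs, w0=0.0)  # converges toward 2.0, the mean of targets
```

For quadratic losses the correction term cancels the noise exactly, which is why the toy run converges deterministically; on general losses SVRG only reduces, rather than eliminates, the gradient variance.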
“…To test if the learned representations can separate phonetic categories, we use a minimal-pair ABX discrimination task [20,21]. It only requires defining a dissimilarity function d between speech tokens; no external training algorithm is needed.…”
Section: Evaluation With ABX Tasks
confidence: 99%
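The ABX task described in that statement can be sketched in a few lines: given a dissimilarity d, a triplet (A, B, X) with A and X drawn from the same category is scored correct when d(A, X) < d(B, X). This is a minimal illustration assuming Euclidean distance on toy 1-D tokens, not the evaluation code used by the paper or the Challenge.

```python
# Minimal-pair ABX discrimination score from a dissimilarity function d.
from itertools import product

def abx_score(cat_a, cat_b, d):
    """Fraction of (A, B, X) triplets correctly discriminated.

    cat_a, cat_b: lists of tokens from two categories; X ranges over
    cat_a (A's category), with X a distinct token from A.
    d: dissimilarity function d(x, y) -> float.
    """
    correct = 0.0
    total = 0
    for i, a in enumerate(cat_a):
        for j, x in enumerate(cat_a):
            if i == j:  # X must be a different token than A
                continue
            for b in cat_b:
                total += 1
                if d(a, x) < d(b, x):
                    correct += 1
                elif d(a, x) == d(b, x):
                    correct += 0.5  # count ties as half, a common convention
    return correct / total

# Toy 1-D example: two well-separated categories, Euclidean dissimilarity
d = lambda x, y: abs(x - y)
score = abx_score([0.0, 0.1], [1.0, 1.1], d)  # -> 1.0
```

Because the score is built directly from d, any representation can be evaluated by plugging in a distance over its feature vectors, which is exactly why no external training algorithm is needed.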
“…The evaluation metrics defined by the Zero Resource Speech Challenge of Interspeech 2015 [20] were used in this work. For evaluating the discovered frame-level features, the minimal-pair ABX task [43], [44] was used to measure the discriminability between two sound categories as in Track 1 of the Challenge. For evaluating the discovered tokens in Track 2 of the Challenge, a total of seven evaluation metrics were defined: Normalized Edit Distance (NED), Coverage, Matching F-score, Grouping F-score, Type F-score, Token F-score and Boundary F-score [20].…”
Section: B. Evaluation Metrics For Discovered Features And Tokens
confidence: 99%