2007 15th International Conference on Digital Signal Processing
DOI: 10.1109/icdsp.2007.4288650
A Comparative Study of Linear and Nonlinear Dimensionality Reduction for Speaker Identification

Cited by 6 publications (3 citation statements)
References 13 publications
“…An affinity matrix was constructed with a normalized angle kernel, and eigenvectors were estimated via diffusion map embedding ( Figure 1A ), a nonlinear dimensionality reduction technique ( Coifman and Lafon, 2006 ) that projects connectome features into low-dimensional manifolds ( Margulies et al, 2016 ). This technique is only controlled by a few parameters, computationally efficient, and relatively robust to noise compared to other nonlinear techniques ( Errity and McKenna, 2007 ; Gallos et al, 2020 ; Hong et al, 2020 ; Tenenbaum et al, 2000 ), and has been extensively used in the previous gradient mapping literature ( Hong et al, 2019 ; Hong et al, 2020 ; Huntenburg et al, 2017 ; Larivière et al, 2020a ; Margulies et al, 2016 ; Müller et al, 2020 ; Paquola et al, 2019a ; Park et al, 2021b ; Valk et al, 2020 ; Vos de Wael et al, 2020a ). It is controlled by two parameters α and t , where α controls the influence of the density of sampling points on the manifold (α = 0, maximal influence; α = 1, no influence) and t controls the scale of eigenvalues of the diffusion operator.…”
Section: Methods
confidence: 99%
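The quoted passage describes diffusion map embedding: an affinity matrix built with a normalized-angle kernel, a density normalization governed by α (α = 0, maximal influence of sampling density; α = 1, no influence), and a diffusion-time parameter t scaling the eigenvalues. As a rough illustration of those steps (not the cited authors' implementation; the function name, kernel details, and eigensolver choice here are assumptions), a minimal numpy sketch might look like:

```python
import numpy as np

def diffusion_map(features, alpha=0.5, t=1, n_components=2):
    """Sketch of diffusion map embedding with a normalized-angle kernel."""
    # Normalized-angle affinity: 1 - arccos(cosine similarity) / pi
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    cos = np.clip(unit @ unit.T, -1.0, 1.0)
    W = 1.0 - np.arccos(cos) / np.pi
    # Density normalization: alpha controls the influence of the
    # sampling density (alpha = 0 maximal, alpha = 1 none)
    q = W.sum(axis=1)
    K = W / np.outer(q**alpha, q**alpha)
    # Row-normalize to a Markov (diffusion) transition matrix
    P = K / K.sum(axis=1, keepdims=True)
    # Eigendecompose; t scales the eigenvalues of the diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial constant eigenvector (eigenvalue 1)
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1]**t

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))   # 50 hypothetical connectome feature rows
emb = diffusion_map(X)          # shape (50, 2): low-dimensional manifold
```

The two exposed parameters mirror the passage: α enters through the density normalization of the kernel, and t through the eigenvalue powers of the diffusion operator.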
“…These manifold learning algorithms have been successfully applied to a number of speech processing applications including low dimensional visualization of speech, analysis of speech [10] and speaker recognition [11]. However, there is very little research on these manifold learning algorithms applied for phoneme recognition tasks.…”
Section: Introduction
confidence: 99%