2017
DOI: 10.1109/tifs.2017.2678458

Speaker Identification Using Discriminative Features and Sparse Representation

Cited by 18 publications (10 citation statements)
References: 52 publications
“…Then, the trained model is used to convert the noisy speech signals into the clean speech signals. Notable machine-learning-based SE methods include compressive sensing [26], sparse coding [27], [28], non-negative matrix factorization [29], and robust principal component analysis [30], [31].…”
Section: Introduction (mentioning)
confidence: 99%
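The excerpt above groups the cited paper's sparse-representation approach with other dictionary-style speech-enhancement (SE) methods such as non-negative matrix factorization. Below is a minimal sketch of that noisy-to-clean idea, assuming NMF on magnitude spectrograms with separately learned speech and noise dictionaries; the dictionary sizes, iteration counts, and random placeholder spectrograms are illustrative assumptions, not details from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_nmf(V, r, n_iter=200, eps=1e-10):
    """Factorise a non-negative matrix V (freq x time) as W @ H using
    standard multiplicative updates for the Euclidean objective."""
    F, T = V.shape
    W = rng.random((F, r)) + eps
    H = rng.random((r, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def fit_activations(V, W, n_iter=200, eps=1e-10):
    """Estimate activations H for a fixed spectral dictionary W."""
    H = rng.random((W.shape[1], V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

# Placeholder magnitude spectrograms (freq bins x frames); in practice these
# would come from STFTs of clean-speech, noise, and noisy-speech recordings.
S_clean = rng.random((257, 400))
S_noise = rng.random((257, 400))
S_noisy = rng.random((257, 100))

# Learn separate spectral dictionaries for speech and for noise.
W_speech, _ = fit_nmf(S_clean, r=40)
W_noise, _ = fit_nmf(S_noise, r=20)

# Decompose the noisy spectrogram on the stacked dictionary, keep only the
# speech contribution, and build a Wiener-style mask from it.
W_all = np.hstack([W_speech, W_noise])
H_all = fit_activations(S_noisy, W_all)
speech_hat = W_speech @ H_all[:W_speech.shape[1]]
mask = speech_hat / (W_all @ H_all + 1e-10)
S_enhanced = mask * S_noisy  # enhanced magnitude; noisy phase is reused at resynthesis
```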
“…SE methods in the second class are based on machine-learning algorithms; these methods typically prepare a model for noisy-to-clean transformation in a data-driven manner. Notable SE methods belonging to this class include hidden Markov models [35], non-negative matrix factorization [36]-[38], compressive sensing [39], sparse coding [40], and robust principal component analysis [41]. In addition, artificial neural networks (ANNs), as a successful machine-learning model, have been used for SE because of their powerful nonlinear transformation capability.…”
Section: Introduction (mentioning)
confidence: 99%
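For the sparse-coding family named in this excerpt, the following is a minimal sketch rather than any cited paper's implementation: a dictionary of clean-speech spectral atoms is learned offline, each noisy frame is approximated by a few atoms, and the residual the dictionary cannot explain is treated as noise. The scikit-learn routines, frame dimensions, and sparsity level are placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Placeholder magnitude-spectrogram frames (rows = frames, cols = frequency bins).
X_clean = rng.random((500, 129))
X_noisy = X_clean[:50] + 0.3 * rng.random((50, 129))

# Learn an overcomplete dictionary of clean-speech spectral atoms offline.
dico = DictionaryLearning(n_components=160, max_iter=20, random_state=0)
dico.fit(X_clean)

# Sparse-code each noisy frame on the clean dictionary (few active atoms) and
# reconstruct from those atoms only, discarding the unexplained residual.
codes = sparse_encode(X_noisy, dico.components_,
                      algorithm='omp', n_nonzero_coefs=8)
X_enhanced = codes @ dico.components_
```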
“…The prepared model is used to transform noisy speech signals to clean speech signals. Well-known machine-learning-based models include non-negative matrix factorization [21], [22], [23], compressive sensing [24], sparse coding [25], [26], and robust principal component analysis (RPCA) [27]. Deep learning models have drawn great interest due to their outstanding nonlinear mapping capabilities.…”
Section: Introduction (mentioning)
confidence: 99%
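The deep-learning direction mentioned in this excerpt typically amounts to training a network to regress clean spectral frames from noisy ones. A minimal PyTorch sketch under that assumption is shown below; the layer sizes, random stand-in data, and training schedule are illustrative only, not taken from the cited works.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder paired training data: log-magnitude spectra of noisy and clean
# frames (frames x frequency bins); real data would come from an STFT.
noisy = torch.rand(1000, 257)
clean = torch.rand(1000, 257)

# Small fully connected network mapping a noisy frame to an estimate of the
# corresponding clean frame (the nonlinear noisy-to-clean regression).
net = nn.Sequential(
    nn.Linear(257, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 257),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(net(noisy), clean)
    loss.backward()
    opt.step()

# At run time the trained network converts noisy frames into enhanced frames.
enhanced = net(torch.rand(10, 257)).detach()
```

In practice the regression target is often a time-frequency mask rather than the spectrum itself, which tends to be easier to learn; either choice fits the noisy-to-clean paradigm described in the excerpts.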