2017
DOI: 10.1007/978-3-319-66963-2_6

A Semi-supervised Speaker Identification Method for Audio Forensics Using Cochleagrams

Cited by 5 publications (8 citation statements)
References 17 publications
“…Many computational systems have been designed for speech segregation based on ASA principles, with important applications such as robust speech recognition [2] and hearing aid design [10]. Pitch and amplitude modulation are examples of cues used in computational ASA (CASA) methods to separate the voiced portions of the target speech, and estimated harmonics in neighboring frames were grouped using pitch continuity [3]. Cheng et al [11] employed speaker models to perform joint estimation of speaker identities and sequential grouping of temporally separated time-frequency (T-F) segments.…”
Section: Related Work
confidence: 99%
“…Musical segments are a problematic region for audio content analysis, particularly in cases where speech isolation is essential. The accuracy of speech recognition and isolation can be improved by removing noise from audio speech signals [3]. Current speech and music isolation techniques use both learning-based and non-learning-based approaches [2].…”
Section: Introduction
confidence: 99%
“…Noise garbles speech and introduces obstacles in various applications, including automatic speech segregation. Noise removal from audio speech signals enhances the accuracy of speech recognition and segregation applications [2].…”
Section: Introduction
confidence: 99%
“…Learning-based methods are employed more frequently than non-learning-based methods because of their potential for segregating speech and music components more effectively in the presence of background noise. Lekshmil and Sathidevi [1] proposed non-learning-based speech segregation models for single-channel speech separation using the short-time Fourier transform (STFT) [2]. They use pitch-information-based techniques for the segregation process.…”
Section: Introduction
confidence: 99%