2020
DOI: 10.1007/978-981-15-9129-7_29
Detection of Various Speech Forgery Operations Based on Recurrent Neural Network

Cited by 2 publications (1 citation statement)
References 14 publications
“…With the development of fake audio detection, many kinds of acoustic features have been proposed to improve detection performances. Linear frequency cepstral coefficient (LFCC) and Mel-frequency cepstral coefficient (MFCC) are two of the most used acoustic features for the detection of fake speech [18][19][20]. In this paper, we extract LFCC and MFCC features from the audio signals as the input data of the ASLNet.…”
Section: Acoustic Feature and Binary Ground Truth Mask
confidence: 99%
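The statement above describes extracting LFCC and MFCC features from audio as network input. Both are cepstral coefficients computed the same way (framing, windowing, power spectrum, triangular filterbank, log, DCT); they differ only in whether the filter edges are spaced linearly in Hz or on the mel scale. A minimal NumPy/SciPy sketch follows; the `cepstral_features` helper, filter counts, and frame sizes are illustrative assumptions, not the configuration used by the cited ASLNet paper.

```python
import numpy as np
from scipy.fftpack import dct


def _fbank(edges_hz, n_fft, sr):
    """Triangular filterbank over rFFT bins, from band edges given in Hz."""
    bins = np.floor((n_fft + 1) * edges_hz / sr).astype(int)
    fb = np.zeros((len(edges_hz) - 2, n_fft // 2 + 1))
    for i in range(fb.shape[0]):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising slope
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling slope
    return fb


def cepstral_features(y, sr, scale="linear", n_filt=40, n_ceps=20,
                      frame_len=400, hop=160, n_fft=512):
    """LFCC (scale='linear') or MFCC (scale='mel') of a 1-D signal.

    Returns an array of shape (n_frames, n_ceps).
    """
    if scale == "mel":
        # Mel-spaced edges: warp to mel, space evenly, warp back to Hz.
        mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
        imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
        edges = imel(np.linspace(mel(0.0), mel(sr / 2.0), n_filt + 2))
    else:
        # Linearly spaced edges in Hz -- the only difference for LFCC.
        edges = np.linspace(0.0, sr / 2.0, n_filt + 2)
    fb = _fbank(edges, n_fft, sr)

    # Frame, window, and take the power spectrum of each frame.
    n = 1 + (len(y) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n)[:, None]
    frames = y[idx] * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2

    # Log filterbank energies, then DCT-II to decorrelate into cepstra.
    log_energy = np.log(power @ fb.T + 1e-10)
    return dct(log_energy, type=2, axis=1, norm="ortho")[:, :n_ceps]


# Usage: one second of a 440 Hz tone at 16 kHz.
sr = 16000
y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
lfcc = cepstral_features(y, sr, scale="linear")
mfcc = cepstral_features(y, sr, scale="mel")
```

In practice the two feature matrices would be stacked or fed as parallel channels into the detection network; that combination step is specific to the model and not sketched here.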