2020 12th International Conference on Computational Intelligence and Communication Networks (CICN)
DOI: 10.1109/cicn49253.2020.9242583

Deep Learning Audio Spectrograms Processing to the Early COVID-19 Detection

Abstract: The objective of the paper is to provide a model capable of serving as a basis for retraining a convolutional neural network that can be used to detect COVID-19 cases through spectrograms of coughing, sneezing and other respiratory sounds from infected people. To address this challenge, the methodology focused on Deep Learning techniques, working with a dataset of sounds from sick and non-sick people and using ImageNet's Xception architecture to train the model through fine-tuning. The results o…
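The abstract describes fine-tuning an ImageNet-pretrained Xception network on spectrogram images of respiratory sounds. The excerpt above does not include code; the following is a minimal sketch of that kind of transfer-learning setup, assuming Keras/TensorFlow, a directory of spectrogram images labelled sick / not sick, and illustrative hyperparameters (none of these values are taken from the paper).

```python
# Hypothetical fine-tuning sketch: ImageNet-pretrained Xception on spectrogram images.
# Directory layout, image size, and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

IMG_SIZE = (299, 299)  # Xception's native input resolution

train_ds = tf.keras.utils.image_dataset_from_directory(
    "spectrograms/train",        # assumed layout: spectrograms/train/{sick,not_sick}/
    label_mode="binary",
    image_size=IMG_SIZE,
    batch_size=32,
)

base = Xception(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False           # stage 1: train only the new classification head

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # Xception expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),     # binary output: sick vs. not sick
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Stage 2 (fine-tuning): unfreeze the base network and continue at a low learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

The two-stage schedule (frozen base, then low-learning-rate unfreezing) is the usual way fine-tuning is done with Keras application models; the abstract only states that fine-tuning was used, not which layers were retrained.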

Cited by 21 publications (12 citation statements). References 3 publications.
“…The type of data collected and stored as datasets were as follows: cough recordings (eg, recording for 3 s) [14], [15], [17], [23], [25]–[27]; cough, breathing and speech recording (reciting a sentence) [16], [19], [22], [27]–[30]; speech collection [11], [20], [21], [24]; and sound samples (eg, recording using certain sounds) [18]. Moreover, some studies also collected a questionnaire based on a medical history or COVID-19 symptoms (eg, underlying conditions, age, sex, temperature) [15], [17].…”
Section: Results (mentioning; confidence: 99%)
“…10 of the studies used features like MFCCs [11], [13]–[18], [25]–[30], while 19 studies included CNN [11], [13]–[16], [18], [21]–[30] to extract features from the datasets. The data size was between 1000 and 9999 in half of the studies (n=11) [14], [16], [19], [22], [25]–[27], [28]–[30], whereas seven of the studies had a sample size greater than 10000 [15], [18], [20], [21], [28]. The Supplementary Material includes the datasets of all the examined studies.…”
Section: Results (mentioning; confidence: 99%)
“…The COVID-19 detection method was created by Rodriguez et al. [115] using spectrograms of coughing, sneezing, and other respiratory sounds. The dataset contains two labels, sick and not sick, gathered by the pharmaceutical manufacturer Pfizer in the United States.…”
Section: Deep Learning (mentioning; confidence: 99%)
“…Typical COVID-19 symptoms such as wet and dry coughs, croup, pertussis and bronchitis coughs have been considered when acquiring undetermined coughs. A similar approach has been adopted on the Pfizer “Sick Sound” dataset [30]; in this case, audio signals were converted into spectrogram images using a Short-Time Fourier Transform, and the resulting images were fed into an Xception deep neural network [31]. A final accuracy of 75% was achieved with no inter-patient separation scheme; the result is consistent with those already found in the literature (e.g.…
Section: State of the Art Review (mentioning; confidence: 99%)
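The statement above describes the preprocessing applied to the Pfizer “Sick Sound” recordings: audio is converted into spectrogram images with a Short-Time Fourier Transform (STFT), and those images are then fed to an Xception network. As a rough illustration of that conversion step only (not the cited authors' actual pipeline), a librosa-based sketch could look like this; the file name, FFT size, and hop length are placeholder assumptions.

```python
# Illustrative STFT-spectrogram conversion, assuming librosa and matplotlib.
# File name, n_fft, and hop_length are placeholder assumptions, not values from the cited work.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("cough_sample.wav", sr=None)            # hypothetical cough recording
stft = librosa.stft(y, n_fft=2048, hop_length=512)           # Short-Time Fourier Transform
spec_db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)  # magnitude spectrogram in dB

# Save the spectrogram as an image that a CNN such as Xception could consume.
fig, ax = plt.subplots(figsize=(3, 3))
librosa.display.specshow(spec_db, sr=sr, hop_length=512, ax=ax)
ax.set_axis_off()
fig.savefig("cough_sample_spectrogram.png", bbox_inches="tight", pad_inches=0)
plt.close(fig)
```

Whether the cited work used linear-frequency, log-frequency, or mel-scaled spectrograms is not stated in the excerpt, so the axis scaling here is arbitrary.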