Abstract

Cough is the most common symptom of many respiratory diseases. Currently, no standardized method for objective cough monitoring exists that is both commercially available and clinically accepted. Our aim is to develop an algorithm capable of objective, ambulatory, and automated monitoring of cough frequency based on the analysis of sound events. Because speech is the most common sound in 24-hour recordings, the first step in developing this algorithm is to distinguish cough sounds from speech. For this purpose we obtained recordings from 20 healthy volunteers. All subjects read a text from a book continuously and produced voluntary coughs at indicated instants. The obtained sounds were analyzed using linear and non-linear methods in the time and frequency domains. We used a classification tree to distinguish cough sounds from speech. The median sensitivity was 100% and the median specificity was 95%. In the next step we enlarged the set of analyzed sound events. Apart from cough sounds and speech, the analyzed sounds included induced sneezing, voluntary throat and nasopharynx clearing, voluntary forced ventilation, laughing, voluntary snoring, eructation, nose blowing, and loud swallowing. These sound events were obtained from 32 healthy volunteers, and for their analysis and classification we used the same algorithm as in the previous study. The median sensitivity was 86% and the median specificity was 91%. In the final step, we tested the effectiveness of the developed algorithm in distinguishing cough from non-cough sounds produced during normal daily activities in patients suffering from respiratory diseases. Our study group consisted of 9 patients with respiratory diseases. The recording time was 5 hours. The number of coughs counted by our algorithm was compared with manual cough counts performed by two skilled co-workers.
We found that the automated and manual cough counts differed substantially. For that reason we applied other methods to the distinction of cough sounds from non-cough sounds: we compared the classification tree with artificial neural networks. The median sensitivity increased from 28% (classification tree) to 82% (artificial neural network), while the median specificity did not change significantly. We have extended our set of characteristic parameters with Mel-frequency cepstral coefficients, the weighted Euclidean distance, and the first and second time derivatives. Modification of the classification algorithm is also of interest to us.
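The abstract describes separating cough sounds from speech using time- and frequency-domain descriptors. As a minimal illustration of that idea (not the authors' actual feature set or thresholds), the sketch below computes two simple descriptors on synthetic, illustrative signals: a cough-like decaying broadband burst scores higher on both zero-crossing rate and spectral centroid than a harmonic speech-like tone.

```python
import numpy as np

def features(x, fs):
    """Two simple sound descriptors: zero-crossing rate (time domain)
    and spectral centroid (frequency domain)."""
    # fraction of samples at which the signal changes sign
    zcr = np.mean(np.abs(np.diff(np.signbit(x).astype(int))))
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)  # in Hz
    return zcr, centroid

fs = 8000                      # sampling rate, Hz (illustrative)
t = np.arange(fs) / fs         # 1 s of signal
rng = np.random.default_rng(0)

# speech-like: low-frequency harmonic tone (voiced-speech caricature)
speech_like = np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)
# cough-like: broadband noise burst with a fast-decaying envelope
cough_like = rng.standard_normal(fs) * np.exp(-10 * t)

f_speech = features(speech_like, fs)
f_cough = features(cough_like, fs)
# the broadband burst has a much higher ZCR and spectral centroid,
# so even a one-level classification tree (a threshold) separates them
```

A real system would, as the abstract notes, use richer parameters (e.g. MFCCs) and a trained classification tree or neural network rather than hand-set thresholds.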
A finite difference model of the human body is used to analyze the excitation process in denervated skeletal muscles which are stimulated via surface electrodes. The MATLAB research tool S-FIELD was developed to simulate the stationary field for real human 3D geometry, and FES-ANALYSE for a first estimate of superthreshold regions. Action potential initiation is simulated with a muscle fiber model of the Hodgkin-Huxley type and with a generalized form of the activating function. At the endings of a target fiber the activating function is proportional to the first derivative of the extracellular voltage, whereas the second derivative is the driving element for the central part. The analysis demonstrates that it is generally easier to initiate action potentials at the fiber endings, especially for fibers in deeper regions.