Pertussis is a contagious respiratory disease that mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide, resulting in over 200,000 deaths. It is prevalent mainly in developing countries, where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas and contain outbreaks. In this paper we present an algorithm for automated diagnosis of pertussis from audio signals by analyzing cough and whoop sounds. The algorithm consists of three main blocks that perform automatic cough detection, cough classification and whooping-sound detection. Each block extracts relevant features from the audio signal and classifies them using a logistic regression model. The outputs of these blocks are collated to produce a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated on audio recordings from 38 patients. The algorithm diagnoses pertussis correctly in all recordings without any false diagnoses. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm, coupled with its high accuracy, demonstrates that it can be readily deployed on smartphones and can be extremely useful for quick identification or early screening of pertussis and for controlling infection outbreaks.
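The block structure described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stand-in feature extractor, the weights, and the rule for collating the block outputs into a single likelihood are all assumed placeholders, since the abstract does not specify them.

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

def detect_coughs(segments, feats, w, b):
    """Block sketch: keep segments whose logistic-regression
    probability of being a cough exceeds 0.5."""
    return [s for s in segments if sigmoid(feats(s) @ w + b) > 0.5]

def pertussis_likelihood(cough_probs, whoop_probs):
    """Collate block outputs into one score. Here a recording is
    scored by the strongest pertussis-like cough probability times
    the strongest whoop probability -- one plausible collation rule,
    not necessarily the paper's."""
    return float(max(cough_probs, default=0.0) * max(whoop_probs, default=0.0))

# Usage with toy data: mean amplitude as a stand-in feature.
feats = lambda s: np.array([np.mean(s)])
kept = detect_coughs([np.ones(4), -np.ones(4)], feats, np.array([10.0]), 0.0)
score = pertussis_likelihood([0.9], [0.8])  # 0.9 * 0.8 = 0.72
```

In practice each block would have its own trained weights, and the collation rule would be fitted or tuned against labelled recordings rather than hard-coded.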
Cardiovascular diseases currently pose the greatest threat to human health worldwide. Proper investigation of abnormalities in heart sounds is known to provide vital clinical information that can assist in the diagnosis and management of cardiac conditions. However, despite significant advances in the development of algorithms for automated classification and analysis of heart sounds, the validity of the different approaches has not been systematically reviewed. This paper provides an in-depth systematic review and critical analysis of the existing approaches for automatic identification and classification of heart sounds. All items on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2009 Checklist were followed and addressed thoroughly to maintain the quality of the review. Of the 1347 research articles available in academic databases from 1963 to 2018, 117 peer-reviewed articles met the search and selection criteria of this paper. Among them, 53 articles focus on segmentation, 72 on feature extraction approaches, 88 on classification, and 56 report on databases and heart-sound acquisition. This review makes clear that, although much research has been done on automated analysis, work remains to develop robust methods for identifying and classifying the various events in the cardiac cycle, so that these methods can be used, in combination with wearable mobile technologies, to improve the diagnosis and management of cardiovascular diseases.
Designing wearable systems for sleep detection and staging is extremely challenging due to the numerous constraints associated with sensing, usability, accuracy, and regulatory requirements. Several researchers have explored the use of signals from a subset of the sensors used in polysomnography (PSG), whereas others have demonstrated the feasibility of alternative sensing modalities. In this paper, a systematic review of the different sensing modalities that have been used for wearable sleep staging is presented. Based on a review of 90 papers, 13 different sensing modalities are identified. Each sensing modality is examined to identify the signals that can be obtained from it, the sleep stages that can be reliably identified, the classification accuracy of systems using the modality, and the usability constraints of the sensor in a wearable system. The review concludes that the two most common sensing modalities in use are those based on electroencephalography (EEG) and photoplethysmography (PPG). EEG-based systems are the most accurate, with EEG being the only sensing modality capable of identifying all the stages of sleep. PPG-based systems are much simpler to use and better suited to wearable monitoring but are unable to identify all the sleep stages.
Cough is a common symptom that manifests in numerous respiratory diseases. In chronic respiratory diseases such as asthma and COPD, monitoring of cough is an integral part of managing the disease. This paper presents an algorithm for automatic detection of cough events from acoustic signals. The algorithm uses only three spectral features with a logistic regression model to separate sound segments into cough and non-cough events. The spectral features were derived by simple calculations from two frequency bands of the sound spectrum, chosen for their characteristic behaviour in cough sounds. The algorithm achieved a high sensitivity of 90.31%, specificity of 98.14%, and F1-score of 88.70%. Its low complexity and high detection performance demonstrate its potential for use in remote patient monitoring systems for real-time, automatic cough detection.
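An illustrative sketch of this kind of detector is shown below. The two frequency bands (300–1500 Hz and 1500–4000 Hz) and the three log-power features are assumptions for illustration, not the bands or features reported in the paper, and the weights are placeholders standing in for a trained logistic regression model.

```python
import numpy as np

def band_power(segment, fs, f_lo, f_hi):
    """Average spectral power of `segment` within [f_lo, f_hi) Hz."""
    power = np.abs(np.fft.rfft(segment)) ** 2
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return power[mask].mean()

def cough_features(segment, fs):
    """Hypothetical 3-feature vector from two frequency bands:
    log power in each band and the log power ratio."""
    p_low = band_power(segment, fs, 300, 1500) + 1e-12    # assumed band
    p_high = band_power(segment, fs, 1500, 4000) + 1e-12  # assumed band
    return np.array([np.log(p_low), np.log(p_high), np.log(p_low / p_high)])

def predict_cough(features, weights, bias):
    """Logistic regression: P(cough | features)."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

# Usage with placeholder weights (real ones come from training)
# and 250 ms of noise as a stand-in audio segment.
fs = 8000
rng = np.random.default_rng(0)
segment = rng.standard_normal(fs // 4)
p = predict_cough(cough_features(segment, fs), np.array([0.5, -0.5, 1.0]), 0.0)
```

A segment would be labelled a cough event when `p` exceeds a decision threshold (commonly 0.5, tunable to trade sensitivity against specificity).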
The push towards low-power, wearable sleep systems requires using a minimum number of recording channels to extend battery life, keep the processing load small, and improve user comfort. Since most sleep stages can be identified from EEG traces, enormous power savings could be achieved by using a single channel of EEG. However, detecting REM sleep from single-channel EEG is challenging due to its electroencephalographic similarities with the N1 and Wake stages. In this paper we investigate a novel feature in sleep EEG that demonstrates high discriminatory ability for detecting REM phases. We then use this feature, which is based on the spectral edge frequency (SEF) in the 8–16 Hz frequency band, together with the absolute power and the relative power of the signal, to develop a simple REM detection algorithm. We evaluate the performance of the proposed algorithm on overnight single-channel EEG recordings from 5 training and 15 independent test subjects. Our algorithm achieved a sensitivity of 83%, specificity of 89% and selectivity of 61% on a test database consisting of 2221 REM epochs. It also achieved a sensitivity and selectivity of 81% and 75% on the PhysioNet Sleep-EDF database consisting of 8 subjects. These results demonstrate that SEF can be a useful feature for automatic detection of REM stages of sleep from a single channel of EEG. Electronic supplementary material: the online version of this article (doi:10.1007/s10439-014-1085-6) contains supplementary material, which is available to authorized users.
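A standard way to compute the spectral edge frequency over a band is to find the frequency below which a fixed fraction of the band-limited spectral power lies. The sketch below assumes a 95% edge fraction, which the abstract does not specify.

```python
import numpy as np

def sef(epoch, fs, f_lo=8.0, f_hi=16.0, edge=0.95):
    """Spectral edge frequency: the frequency below which `edge`
    of the spectral power within [f_lo, f_hi] Hz lies."""
    power = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    band_freqs, band_power = freqs[mask], power[mask]
    cum = np.cumsum(band_power)
    idx = np.searchsorted(cum, edge * cum[-1])
    return band_freqs[idx]

# Sanity check: a 30 s epoch of a pure 10 Hz sine puts all of the
# band's power at 10 Hz, so the edge frequency is 10 Hz.
fs = 100
t = np.arange(30 * fs) / fs
val = sef(np.sin(2 * np.pi * 10 * t), fs)
```

For real EEG, the SEF would be computed per 30 s epoch and combined with absolute and relative band power as inputs to the REM classifier.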