Objective: Although awareness of sleep disorders is increasing, limited information is available on whole-night detection of snoring. Our study aimed to develop and validate a robust, high-performance, and sensitive whole-night snore detector based on non-contact technology.
Design: Sounds during polysomnography (PSG) were recorded using a directional condenser microphone placed 1 m above the bed. An AdaBoost classifier was trained and validated on manually labeled snoring and non-snoring acoustic events.
Patients: Sixty-seven subjects (age 52.5±13.5 years, BMI 30.8±4.7 kg/m², m/f 40/27) referred for PSG for diagnosis of obstructive sleep apnea were prospectively and consecutively recruited. Twenty-five subjects were used for the design study; the validation study was blindly performed on the remaining forty-two subjects.
Measurements and Results: To train the proposed sound detector, more than 76,600 acoustic episodes collected in the design study were manually classified by three scorers into snore and non-snore episodes (e.g., bedding noise, coughing, environmental noise). A feature selection process was applied to select the most discriminative features extracted from the time and spectral domains. The average snore/non-snore detection rate (accuracy) for the design group was 98.4%, based on ten-fold cross-validation. When tested on the validation group, the average detection rate was 98.2%, with a sensitivity of 98.0% (snore detected as snore) and a specificity of 98.3% (noise detected as noise).
Conclusions: Audio-based features extracted from the time and spectral domains can accurately discriminate between snore and non-snore acoustic events. This audio analysis approach enables detection and analysis of snoring sounds from a full night in order to produce quantified measures for objective follow-up of patients.
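The pipeline described in this abstract, time- and spectral-domain features fed to an AdaBoost classifier evaluated with ten-fold cross-validation, can be sketched as below. This is an illustrative reconstruction on synthetic audio, not the authors' implementation; the specific feature set (energy, zero-crossing rate, spectral centroid) is an assumption.

```python
# Hedged sketch: snore/non-snore classification with hand-crafted acoustic
# features and AdaBoost, scored by ten-fold cross-validation.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def extract_features(event, fs=16000):
    """Toy feature vector: energy, zero-crossing rate, spectral centroid."""
    energy = np.mean(event ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(event)))) / 2
    spectrum = np.abs(np.fft.rfft(event))
    freqs = np.fft.rfftfreq(len(event), d=1 / fs)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([energy, zcr, centroid])

# Synthetic stand-in data: "snores" as low-frequency tones with a little
# noise, "non-snores" as broadband white noise.
rng = np.random.default_rng(0)
fs, n = 16000, 8000
snores = [np.sin(2 * np.pi * 120 * np.arange(n) / fs)
          + 0.1 * rng.standard_normal(n) for _ in range(100)]
noises = [rng.standard_normal(n) for _ in range(100)]
X = np.array([extract_features(e, fs) for e in snores + noises])
y = np.array([1] * 100 + [0] * 100)

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # ten-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```

On real recordings the feature-selection step the abstract mentions would prune a much larger candidate feature pool before training.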
Sleep staging is essential for evaluating sleep and its disorders. Most sleep studies today incorporate contact sensors that may interfere with natural sleep and may bias results. Moreover, the availability of sleep studies is limited, and many people with sleep disorders remain undiagnosed. Here, we present a pioneering approach for rapid eye movement (REM), non-REM, and wake staging (macro-sleep stages, MSS) estimation based on the analysis of sleep sounds. Our working hypothesis is that the properties of sleep sounds, such as breathing and movement, differ within each MSS. We recorded audio signals, using non-contact microphones, of 250 patients referred for a polysomnography (PSG) study in a sleep laboratory. We trained an ensemble of one-layer, feedforward neural network classifiers fed by time series of sleep sounds to produce real-time and offline analyses. The audio-based system was validated and produced an epoch-by-epoch (standard 30-sec segments) agreement with PSG of 87%, with a Cohen's kappa of 0.7. This study shows the potential of audio signal analysis as a simple, convenient, and reliable method for MSS estimation without contact sensors.
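The two validation figures quoted above, epoch-by-epoch agreement and Cohen's kappa, are straightforward to compute once both hypnograms are aligned on 30-sec epochs. A minimal sketch on synthetic labels (not the study's data; the 0/1/2 coding is an assumption):

```python
# Hedged sketch: epoch-by-epoch agreement and Cohen's kappa between a PSG
# hypnogram and an audio-based estimate, on synthetic labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical 30-sec epoch labels: 0 = wake, 1 = non-REM, 2 = REM.
psg   = np.array([0, 0, 1, 1, 1, 2, 2, 1, 1, 0])
audio = np.array([0, 1, 1, 1, 1, 2, 2, 1, 0, 0])

agreement = np.mean(psg == audio)      # fraction of matching epochs
kappa = cohen_kappa_score(psg, audio)  # chance-corrected agreement
print(f"agreement: {agreement:.0%}, kappa: {kappa:.2f}")
# prints "agreement: 80%, kappa: 0.68"
```

Kappa discounts the agreement expected by chance from the stage marginals, which is why it is reported alongside raw agreement.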
Study Objectives: The sound level meter is the gold-standard approach for snoring evaluation. Using this approach, it was established that snoring intensity (in dB) is higher in men and is associated with an increased apnea-hypopnea index (AHI). In this study, we performed a systematic analysis of breathing and snoring sound characteristics using an algorithm designed to detect and analyze breathing and snoring sounds. The effects of sex, sleep stage, and AHI on snoring characteristics were explored.
Methods: We consecutively recruited 121 subjects referred for diagnosis of obstructive sleep apnea. A whole-night audio signal was recorded using a noncontact ambient microphone during polysomnography. A large number (> 290,000) of breathing and snoring (> 50 dB) events were analyzed. Breathing sound events were detected using a signal-processing algorithm that discriminates between breathing and nonbreathing (noise) sounds.
Results: The snoring index (SI, events/h) was 23% higher in men (p = 0.04), and in both sexes the SI gradually declined by 50% across sleep time (p < 0.01), independent of AHI. SI was higher in slow wave sleep (p < 0.03) than in S2 and rapid eye movement sleep; men had a higher SI in all sleep stages than women (p < 0.05). Snoring intensity was similar in both sexes across all sleep stages and was independent of AHI. For both sexes, no correlation was found between AHI and snoring intensity (r = 0.1, p = 0.291).
Conclusions: This audio analysis approach enables systematic detection and analysis of breathing and snoring sounds from a full-night recording. Snoring intensity is similar in both sexes and was not affected by AHI.
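The snoring index used above is a rate (events per hour of sleep), and the reported decline across sleep time implies binning events by hour of the night. A minimal sketch under those assumed definitions, with hypothetical event times rather than the study's data:

```python
# Hedged sketch: snoring index (SI, events/h) and per-hour event counts
# from a list of detected snore-event times, on synthetic data.
import numpy as np

def snoring_index(event_times_sec, total_sleep_hours):
    """SI = snore events per hour of sleep (assumed definition)."""
    return len(event_times_sec) / total_sleep_hours

def hourly_counts(event_times_sec, total_sleep_hours):
    """Snore events falling in each one-hour bin of the night."""
    bins = np.arange(0, int(total_sleep_hours) + 1) * 3600.0
    counts, _ = np.histogram(event_times_sec, bins=bins)
    return counts

# Hypothetical event times (seconds from sleep onset) over a 7-hour night.
rng = np.random.default_rng(1)
events = np.sort(rng.uniform(0, 7 * 3600, 1400))
print(f"SI = {snoring_index(events, 7):.1f} events/h")  # SI = 200.0 events/h
print(hourly_counts(events, 7))
```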
Study Objectives: To develop and validate a novel non-contact system for whole-night sleep evaluation using breathing sound analysis (BSA).
Design: Whole-night breathing sounds (using an ambient microphone) and polysomnography (PSG) were simultaneously collected at a sleep laboratory (mean recording time 7.1 hours). A set of acoustic features quantifying breathing pattern was developed to distinguish between sleep and wake epochs (30-sec segments). Epochs (n = 59,108 design study and n = 68,560 validation study) were classified using an AdaBoost classifier and validated epoch by epoch for sensitivity, specificity, positive and negative predictive values, accuracy, and Cohen's kappa. Sleep quality parameters were calculated from the sleep/wake classifications and compared with PSG for validity.
Setting: University-affiliated sleep-wake disorder center and biomedical signal processing laboratory.
Patients: One hundred fifty patients (age 54.0±14.8 years, BMI 31.6±5.5 kg/m², m/f 97/53) referred for PSG were prospectively and consecutively recruited. The system was trained (design study) on 80 subjects; the validation study was blindly performed on the remaining 70 subjects.
Measurements and Results: The epoch-by-epoch accuracy rate for the validation study was 83.3%, with a sensitivity of 92.2% (sleep detected as sleep), a specificity of 56.6% (wake detected as wake), and a Cohen's kappa of 0.508. Comparing sleep quality parameters of BSA and PSG demonstrated average errors of 16.6 min, 35.8 min, 29.6 min, and 8% for sleep latency, total sleep time, wake after sleep onset, and sleep efficiency, respectively.
Conclusions: This study provides evidence that sleep-wake activity and sleep quality parameters can be reliably estimated solely from breathing sound analysis, highlighting the potential of this innovative approach to measure sleep in research and clinical settings.
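The full metric panel named in this abstract (sensitivity, specificity, PPV, NPV, accuracy, Cohen's kappa) derives from a single 2×2 confusion matrix over epochs. A sketch on synthetic sleep/wake labels, not the study's recordings:

```python
# Hedged sketch: epoch-by-epoch sleep/wake validation metrics from a
# confusion matrix, on synthetic labels (1 = sleep, 0 = wake).
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

psg = np.array([1, 1, 1, 1, 0, 0, 1, 1, 0, 1])  # reference hypnogram
bsa = np.array([1, 1, 1, 0, 0, 1, 1, 1, 0, 1])  # audio-based estimate

tn, fp, fn, tp = confusion_matrix(psg, bsa, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)         # sleep detected as sleep
specificity = tn / (tn + fp)         # wake detected as wake
ppv = tp / (tp + fp)                 # positive predictive value
npv = tn / (tn + fn)                 # negative predictive value
accuracy = (tp + tn) / len(psg)
kappa = cohen_kappa_score(psg, bsa)  # chance-corrected agreement
print(sensitivity, specificity, ppv, npv, accuracy, round(kappa, 2))
```

The low specificity relative to sensitivity reported by the study is typical for sleep/wake scoring: wake epochs are rarer and harder to detect, which is exactly what kappa penalizes and plain accuracy hides.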
In this work, a novel system for sleep quality analysis is proposed. Its purpose is to provide an alternative, non-contact method for detecting and diagnosing sleep-related disorders based on acoustic signal processing. Audio signals of 145 patients with obstructive sleep apnea were recorded (more than 1,000 hours) in a sleep laboratory and analyzed. The method is based on the assumption that during sleep the respiratory efforts are more periodically patterned and consistent than during wakefulness; furthermore, the sound intensity of those efforts is higher, making the pattern more noticeable against the background noise level. The system was trained on 50 subjects and validated on 95 subjects. The system's accuracy in detecting sleep/wake state is 82.1% (epoch by epoch), resulting in a 3.9% error in detecting sleep latency, an 11.4% error in estimating total sleep time, and an 11.4% error in estimating sleep efficiency.
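The periodicity assumption above can be quantified in several ways; one common choice (my assumption, not necessarily the authors' method) is the peak of the normalized autocorrelation of an audio-envelope time series within the plausible breathing-period range:

```python
# Hedged sketch: a periodicity score for an audio envelope. A regular
# breathing envelope autocorrelates strongly at the breath period; an
# irregular waking envelope does not.
import numpy as np

def periodicity_score(envelope, fs, min_lag_s=2.0, max_lag_s=10.0):
    """Max normalized autocorrelation at lags in the breathing range."""
    x = envelope - envelope.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / (ac[0] + 1e-12)
    lo, hi = int(min_lag_s * fs), int(max_lag_s * fs)
    return float(ac[lo:hi].max())

fs = 10                                    # envelope samples/sec (assumed)
t = np.arange(0, 60, 1 / fs)
sleep_env = 1 + np.sin(2 * np.pi * t / 4)  # regular ~4 s breath cycle
rng = np.random.default_rng(0)
wake_env = rng.random(len(t))              # irregular waking sounds

print(periodicity_score(sleep_env, fs))    # high: strongly periodic
print(periodicity_score(wake_env, fs))     # low: no stable period
```

Thresholding such a score per epoch is one simple way to turn the "sleep breathing is more periodic" assumption into a sleep/wake decision.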