Background: While approximately 70% of chronic low back pain (CLBP) sufferers complain of sleep disturbance, the current literature is based on self-report measures, which can be prone to bias, and no objective data on sleep quality based exclusively on CLBP are available. In accordance with the recommendations of The American Sleep Academy, when measuring sleep, both subjective and objective assessments should be considered, as the two are only modestly correlated, suggesting that each modality assesses different aspects of an individual's sleep experience. Therefore, the purpose of this study was to expand previous research into sleep disturbance in CLBP by comparing objective and subjective sleep quality in participants with CLBP and healthy age- and gender-matched controls, to identify correlates of poor sleep, and to test logistics and gather information prior to a larger study.
SUMMARY: We studied a novel non-contact biomotion sensor developed for identifying sleep/wake patterns in adult humans. The biomotion sensor uses ultra-low-power reflected radiofrequency waves to determine the movement of a subject during sleep. An automated classification algorithm has been developed to recognize sleep/wake states on a 30-s epoch basis from the measured movement signal. The sensor and software were evaluated against gold-standard polysomnography on a database of 113 subjects [94 male, 19 female, age 53 ± 13 years, apnoea-hypopnoea index (AHI) 22 ± 24] being assessed for sleep-disordered breathing at a hospital-based sleep laboratory. The overall per-subject accuracy was 78%, with a Cohen's kappa of 0.38. Lower accuracy was seen in the high-AHI group (AHI >15, 63 subjects) than in the low-AHI group (74.8% versus 81.3%); however, most of the difference in accuracy can be explained by the lower sleep efficiency of the high-AHI group. Averaged across subjects, the overall sleep sensitivity was 87.3% and the wake sensitivity was 50.1%. The automated algorithm slightly overestimated sleep efficiency (bias of +4.8%) and total sleep time (TST; bias of +19 min on an average TST of 288 min). We conclude that the non-contact biomotion sensor can provide a valid means of measuring sleep/wake patterns in this patient population, and it also allows direct visualization of respiratory movement signals.
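The agreement metrics reported above (per-subject accuracy, sleep and wake sensitivity, Cohen's kappa) can all be derived from paired 30-s epoch labels. A minimal sketch, assuming both label sequences are aligned and coded 1 = sleep, 0 = wake (the function name and encoding are illustrative, not from the paper):

```python
def epoch_agreement(psg, sensor):
    """Compare sensor-scored sleep/wake epochs against PSG epochs.

    psg, sensor: equal-length sequences of 0 (wake) / 1 (sleep).
    Returns (accuracy, sleep sensitivity, wake sensitivity, Cohen's kappa).
    """
    n = len(psg)
    # Observed agreement: fraction of epochs where the two scorers match.
    accuracy = sum(p == s for p, s in zip(psg, sensor)) / n
    # Sensitivity to each state: correct calls among PSG sleep/wake epochs.
    sleep_sens = (sum(s == 1 for p, s in zip(psg, sensor) if p == 1)
                  / sum(p == 1 for p in psg))
    wake_sens = (sum(s == 0 for p, s in zip(psg, sensor) if p == 0)
                 / sum(p == 0 for p in psg))
    # Chance agreement: product of marginal rates, summed over both states.
    p_e = ((sum(p == 1 for p in psg) / n) * (sum(s == 1 for s in sensor) / n)
           + (sum(p == 0 for p in psg) / n) * (sum(s == 0 for s in sensor) / n))
    # Cohen's kappa: agreement corrected for chance.
    kappa = (accuracy - p_e) / (1 - p_e)
    return accuracy, sleep_sens, wake_sens, kappa
```

The paper's per-subject figures would come from running this on each subject's night and averaging the results across subjects.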
We evaluate a contactless system that continuously measures respiration and activity patterns to identify sleep/wake states in adult humans. The system is based on a novel non-contact biomotion sensor together with an automated signal-analysis and classification system. The sleep/wake detection algorithm combines information from respiratory frequency, respiratory magnitude, and movement to assign each 30-s epoch to either wake or sleep. It was validated on overnight studies from 12 subjects against a standard polysomnogram with manual sleep-stage classification, with excellent results: wake was correctly identified in 69% of epochs and sleep in 88%. Due to its ease of use and good performance, the device is an excellent tool for long-term, highly convenient monitoring of sleep patterns in the home environment.
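To illustrate how the three feature streams might feed a per-epoch decision, here is a toy rule-based classifier. The thresholds, units, and single-epoch rule are hypothetical stand-ins for the paper's trained algorithm, which is not specified in the abstract:

```python
def classify_epoch(resp_freq_hz, resp_magnitude, movement_level,
                   move_thresh=0.5, mag_thresh=0.1):
    """Toy sleep/wake rule for one 30-s epoch (illustrative only).

    resp_freq_hz:   estimated breathing frequency in Hz
    resp_magnitude: normalized breathing-signal amplitude
    movement_level: normalized gross body-movement level
    """
    if movement_level > move_thresh:
        return "wake"   # large body movement: likely awake
    if resp_magnitude < mag_thresh:
        return "wake"   # no clear breathing signal detected
    if 0.1 <= resp_freq_hz <= 0.5:
        return "sleep"  # regular breathing, ~6-30 breaths/min
    return "wake"
```

A real system would smooth decisions across neighboring epochs and learn the decision boundary from PSG-labeled data rather than using fixed thresholds.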
Background: Obstructive sleep apnea (OSA) has a high prevalence, with an estimated 425 million adults with an apnea-hypopnea index (AHI) of ≥15 events/hour, and is significantly underdiagnosed. This presents a significant pain point both for sufferers and for healthcare systems, particularly in a post-COVID-19-pandemic world. As such, it presents an opportunity for new technologies that can enable screening in both developing and developed countries. In this work, the performance of a non-contact OSA screener app that can run on both Apple and Android smartphones is presented. Methods: The subtle breathing patterns of a person in bed can be measured via a smartphone using the "Firefly" app technology platform [and underpinning software development kit (SDK)], which utilizes advanced digital signal processing (DSP) technology and artificial intelligence (AI) algorithms to identify detailed sleep stages, respiration rate, snoring, and OSA patterns. The smartphone is simply placed adjacent to the subject, such as on a bedside table, nightstand, or shelf, during the sleep session. The system was trained on a set of 128 overnights recorded at a sleep laboratory, where volunteers underwent simultaneous full polysomnography (PSG) and "Firefly" smartphone app analysis. A separate independent test set of 120 recordings was collected across a range of Apple iOS and Android smartphones and withheld for performance evaluation by a different team. An operating point tuned for mid-sensitivity (i.e., balancing sensitivity and specificity) was chosen for the screener. Results: The performance on the test set is comparable to ambulatory OSA screeners and other smartphone screening apps, with a sensitivity of 88.3% and specificity of 80.0% [with a receiver operating characteristic (ROC) area under the curve (AUC) of 0.92], at a clinical AHI threshold of ≥15 events/hour of detected sleep time.
Conclusions: The "Firefly" app-based sensing technology offers the potential to significantly lower the barrier of entry to OSA screening, as no hardware other than the user's personal smartphone is required. Additionally, multi-night analysis is possible in the home environment, without requiring the wearing of a portable PSG or other home sleep test (HST).
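The sensitivity and specificity figures quoted above come from dichotomizing both the screener's estimate and the PSG reference at the clinical AHI cut-off. A minimal sketch of that evaluation, assuming per-subject AHI values in events/hour (the function name and data layout are illustrative):

```python
def screen_osa(ahi_estimates, ahi_reference, threshold=15.0):
    """Evaluate a binary OSA screener at a clinical AHI cut-off.

    ahi_estimates: screener-derived AHI per subject (events/hour)
    ahi_reference: PSG-derived AHI per subject (events/hour)
    Returns (sensitivity, specificity) at the given threshold.
    """
    tp = fp = tn = fn = 0
    for est, ref in zip(ahi_estimates, ahi_reference):
        predicted = est >= threshold   # screener flags the subject
        actual = ref >= threshold      # PSG confirms OSA at this cut-off
        if predicted and actual:
            tp += 1
        elif predicted and not actual:
            fp += 1
        elif actual:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn)  # flagged fraction of true OSA cases
    specificity = tn / (tn + fp)  # cleared fraction of non-OSA cases
    return sensitivity, specificity
```

Sweeping the operating point (rather than fixing one threshold on the screener's internal score) is what traces out the ROC curve whose area is reported as 0.92.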
Abstract: Information about person identity is multimodal. Yet most person-recognition systems limit themselves to a single modality, such as facial appearance. With a view to exploiting the complementary nature of different modes of information and increasing pattern-recognition robustness to test-signal degradation, we developed a multiple-expert biometric person identification system that combines information from three experts: audio, visual speech, and face. The system uses multimodal fusion in an automatic, unsupervised manner, adapting to the local (transaction-level) performance and output reliability of each of the three experts. The expert weightings are chosen automatically such that the reliability measure of the combined scores is maximized. To test system robustness to train/test mismatch, we used a broad range of acoustic babble noise and JPEG compression to degrade the audio and visual signals, respectively. Identification experiments were carried out on a 248-subject subset of the XM2VTS database. The multimodal expert system outperformed each of the single experts in all comparisons. At the most severe audio and visual mismatch levels tested, the audio, mouth, face, and tri-expert fusion accuracies were 16.1%, 48%, 75%, and 89.9%, respectively, representing a relative improvement of 19.9% over the best-performing expert.
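The core fusion step described above is a weighted combination of per-expert identification scores. A minimal sketch with fixed weights, whereas the paper chooses them adaptively per transaction to maximize a reliability measure (the function name and score layout are illustrative):

```python
def fuse_scores(expert_scores, weights):
    """Weighted-sum fusion of per-expert identification scores.

    expert_scores: one dict per expert, mapping candidate identity
                   to that expert's (normalized) match score
    weights:       one non-negative weight per expert; in the paper
                   these adapt to each expert's local reliability
    Returns the identity with the highest fused score.
    """
    total = sum(weights)
    fused = {}
    for scores, w in zip(expert_scores, weights):
        for identity, score in scores.items():
            # Accumulate each expert's contribution, weight-normalized.
            fused[identity] = fused.get(identity, 0.0) + (w / total) * score
    return max(fused, key=fused.get)
```

Under mismatch, a degraded expert (e.g., audio in babble noise) would receive a small weight, so its unreliable scores barely influence the fused decision.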