RATIONALE Cough is one of the most frequently encountered symptoms in clinical practice, yet it is difficult to measure objectively in real time. We developed an artificial intelligence (AI) algorithm, deployed in a smartphone application, that detects cough sounds in real time, and performed a preliminary analysis to evaluate its performance.
METHODS We recruited 53 participants who visited outpatient clinics for subacute or chronic cough at 8 academic medical centers in Korea. Participants were asked to use a smartphone to record 1-3 hours of ambient sound during the daytime and at least 5 hours during nighttime sleep for 2 days. In addition, a visual analogue scale (VAS) for cough was administered at enrollment. Two trained researchers independently analyzed the recorded files to count the number of coughs, and their counts were compared with the number of coughs measured by the AI algorithm. Two deep learning models were developed for this purpose: one for daytime ambient sound and one for nighttime sleep sound. Each model counted the coughs in the same data 3 times, and the average error rate was obtained.
RESULTS There were 37 (69.8%) females and 16 (30.2%) males, and the majority (73.6%) of patients were under 50 years old. The mean VAS score was 54.3 ± 21.4. From 255.04 hours of daytime recordings and 614.56 hours of nighttime sleep recordings, 15,050 daytime coughs and 3,442 nighttime sleep coughs were collected. The median cough frequency was 34.2 (range 0 to 433.7) during the day and 1.6 (range 0 to 58.3) at night. On test sets containing 2,941 manually counted daytime coughs and 684 nighttime coughs, the AI algorithm counted 2,998 and 730 on average. The average error rates were 6.0% and 9.1%, respectively, both better than the expected error rate of 10%.
CONCLUSION Our AI algorithm monitored cough sounds in real time with an accuracy of more than 90%. Further development and external validation in larger cohorts will be needed to ensure reliability and robustness in daily clinical and home settings.
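The evaluation above compares manual and AI cough counts and reports an average error rate over repeated runs. As a minimal sketch of how such a figure might be computed (the function name and the example counts are illustrative assumptions, not the study's actual per-recording data), one could average the absolute percentage error across recordings:

```python
# Hedged sketch: averaging per-recording absolute percentage error
# between manual (reference) and AI cough counts. The numbers below
# are hypothetical, not the study's per-recording figures.

def average_error_rate(manual_counts, ai_counts):
    """Mean absolute percentage error across recordings, in percent."""
    errors = [
        abs(ai - manual) / manual * 100
        for manual, ai in zip(manual_counts, ai_counts)
        if manual > 0  # skip recordings with no manually counted coughs
    ]
    return sum(errors) / len(errors)

# Hypothetical example: three recordings
manual = [120, 85, 40]
ai = [126, 80, 43]
print(round(average_error_rate(manual, ai), 1))  # 6.1
```

Averaging per-recording error (rather than dividing the total count difference) penalizes over- and under-counting that would otherwise cancel out across recordings.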
In this paper, we propose an adaptive multi-view video service framework suitable for mobile environments. The proposed framework generates intermediate views in near real time and overcomes the limitations of mobile services by adapting the multi-view video to the processing capability of the mobile device as well as the characteristics of the user. By implementing most of the adaptation processes on the server side, the load on the client is reduced. H.264/AVC is adopted as the compression scheme, enabling the framework to provide an interactive and efficient video service to mobile clients. To this end, we present a multi-view video DIA (Digital Item Adaptation) that adapts the multi-view video according to the MPEG-21 DIA multimedia framework. Experimental results show that the proposed system supports a frame rate of 13 fps for 320x240 video and reduces the time for generating an intermediate view by 20% compared with a conventional 3D projection method.
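The server-side adaptation described above can be pictured as a simple decision step that maps a client's capabilities to stream parameters. The following sketch is purely illustrative (the function, field names, and thresholds are assumptions; the paper's actual MPEG-21 DIA descriptors are not reproduced here), with the 13 fps / 320x240 figures taken from the reported experiment:

```python
# Illustrative sketch only: a server-side adaptation decision in the
# spirit of MPEG-21 DIA, choosing stream parameters from a client's
# reported capability. All names and thresholds are hypothetical.

def adapt_stream(max_decode_fps, screen_width):
    """Pick a resolution and frame rate the client can handle."""
    # Serve the smaller format to narrow screens (hypothetical cutoff).
    width, height = (320, 240) if screen_width <= 320 else (640, 480)
    # Cap at 13 fps, the rate the server sustained for 320x240 video.
    fps = min(max_decode_fps, 13)
    return {"width": width, "height": height, "fps": fps}

print(adapt_stream(max_decode_fps=30, screen_width=320))
# {'width': 320, 'height': 240, 'fps': 13}
```

Keeping this decision on the server matches the framework's goal of reducing client-side load: the client only reports its capability and decodes the already-adapted stream.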