Background: Health care and well-being are 2 main interconnected application areas of conversational agents (CAs). Research, development, and commercial implementations in this area have increased significantly, and, in parallel with this growing interest, new challenges in designing and evaluating CAs have emerged.
Objective: This study aims to identify key design, development, and evaluation challenges of CAs in health care and well-being research, focusing on very recent projects and their emerging challenges.
Methods: A review study was conducted with 17 invited studies, most of which were presented at the ACM (Association for Computing Machinery) CHI 2020 conference workshop on CAs for health and well-being. Eligibility criteria required the studies to involve a CA applied to a health or well-being project (ongoing or recently finished). The participating studies were asked to report on their projects' design and evaluation challenges, and we used thematic analysis to review the studies.
Results: The findings cover a range of topics, from primary care to caring for older adults to health coaching. We identified 4 major themes: (1) Domain Information and Integration, (2) User-System Interaction and Partnership, (3) Evaluation, and (4) Conversational Competence.
Conclusions: CAs proved their worth during the pandemic as health screening tools and are expected to stay on to further support various health care domains, especially personal health care. Growing investment in CAs also shows their value as personal assistants. Our study shows that while some challenges are shared with other CA application areas, safety and privacy remain the major challenges in the health care and well-being domains. Increased collaboration across institutions and entities is a promising direction for addressing some of the major challenges that would otherwise be too complex for individual projects with limited scope and budget.
While the assessment of hearing aid use has traditionally relied on subjective self-reported measures, smartphone-connected hearing aids enable objective data logging from a large number of users. Objective data logging makes it possible to overcome the inaccuracy of self-reported measures. Moreover, data logging enables hearing aid use to be assessed longitudinally and with greater temporal resolution, making it possible to investigate hourly patterns of use and to account for day-to-day variability. This study aims to explore patterns of hearing aid use throughout the day and to assess whether clusters of users with similar use patterns can be identified. We did so by analyzing objective hearing aid use data logged from 15,905 real-world users over a 4-month period. Firstly, we investigated the daily amount of hearing aid use and its within-user and between-user variability. We found that users, on average, used their hearing aids for 10.01 h/day, exhibiting substantial between-user (SD = 2.76 h) and within-user (SD = 3.88 h) variability. Secondly, we examined hourly patterns of hearing aid use by clustering 453,612 logged days into typical days of hearing aid use. We identified three typical days of hearing aid use: full day (44% of days), afternoon (27%), and sporadic evening (26%) day of hearing aid use. Thirdly, we explored the usage patterns of the hearing aid users by clustering the users based on the proportion of time spent in each of the typical days of hearing aid use. We found three distinct user groups, each characterized by a predominant (i.e., experienced ~60% of the time) typical day of hearing aid use. Notably, the largest user group (49% of users) predominantly had full days of hearing aid use. Finally, we validated the user clustering by training a supervised classification ensemble to predict the cluster to which each user belonged. The high accuracy achieved by the supervised classifier ensemble (~86%) indicated valid user clustering and showed that such a classifier can be used to group new hearing aid users in the future. This study provides a deeper insight into the adoption of hearing care treatments and paves the way for more personalized solutions.
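A minimal sketch of the two-stage clustering and validation pipeline described above is shown below, assuming hourly use fractions per logged day as the day-level features; the array shapes, cluster counts, random data, and choice of a random forest as the classification ensemble are illustrative assumptions, not the study's exact pipeline.

```python
# Hypothetical sketch: cluster logged days into typical days, cluster users by the
# proportion of their days in each typical day, then validate with a supervised
# ensemble. All data here is synthetic; real inputs would be logged hourly use.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_days, n_users = 5000, 200                       # toy sizes for illustration
user_ids = rng.integers(0, n_users, n_days)       # which user each day belongs to
daily_use = rng.random((n_days, 24))              # stand-in for hourly use fractions

# 1) Cluster logged days into "typical days" of hearing aid use
#    (e.g., full day, afternoon, sporadic evening).
day_km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(daily_use)
day_type = day_km.labels_

# 2) Describe each user by the proportion of their days in each typical day,
#    then cluster users on those proportions.
user_props = np.zeros((n_users, 3))
for u in range(n_users):
    mask = user_ids == u
    if mask.any():
        user_props[u] = np.bincount(day_type[mask], minlength=3) / mask.sum()
user_cluster = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(user_props)

# 3) Validate the user clustering with a supervised ensemble predicting cluster
#    membership; high cross-validated accuracy suggests well-separated clusters.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, user_props, user_cluster, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```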
Despite having individual perceptual preferences toward sounds, hearing aid users often end up with default hearing aid settings that have no contextual awareness. However, the introduction of smartphone-connected hearing aids has enabled a rethinking of hearing aids as user-adaptive systems that account for both individual and contextual differences. In this study, we aimed to investigate the feasibility of such a context-aware system for providing hearing aid users with a number of relevant hearing aid settings to choose from. During normal real-world hearing aid usage, we applied a smartphone-based method for capturing participants' listening experience and audiological preference for different intervention levels of three audiological parameters (Noise Reduction, Brightness, Soft Gain). Concurrently, we collected contextual data as both self-reports (listening environment and listening intention) and continuous data logging of the acoustic environment (sound pressure level, signal-to-noise ratio). First, we found that having access to different intervention levels of the Brightness and Soft Gain parameters affected listening satisfaction. Second, for all three audiological parameters, the perceived usefulness of having access to different intervention levels was significantly modulated by context. Third, contextual data improved the prediction of both explicit and implicit intervention level preferences. Our findings highlight that context has a significant impact on hearing aid preferences across participants and that contextual data logging can help reduce the space of potential interventions in a user-adaptive system so that the most useful and preferred settings can be offered. Moreover, the proposed mixed-effects model is suitable for capturing predictions at the individual level and could also be extended to group-level predictions by including relevant user features.
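The following is a minimal sketch of how a mixed-effects model of this kind could relate contextual data to preference, with per-participant random intercepts capturing individual differences. The column names (spl, snr, environment, preference), the synthetic data, and the model formula are assumptions for illustration, not the study's actual specification.

```python
# Hypothetical sketch: mixed-effects model predicting a preference score from
# contextual predictors, with participant as the grouping (random-effect) factor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "participant": rng.integers(0, 20, n).astype(str),   # grouping factor
    "spl": rng.normal(65, 10, n),                         # sound pressure level (dB)
    "snr": rng.normal(5, 6, n),                           # signal-to-noise ratio (dB)
    "environment": rng.choice(["quiet", "noise", "speech"], n),  # self-reported context
    "preference": rng.normal(0, 1, n),                    # stand-in preference score
})

# Random intercept per participant captures individual-level baselines;
# fixed effects capture the contextual predictors.
model = smf.mixedlm("preference ~ spl + snr + C(environment)",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```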
Background: Listening programs enable hearing aid (HA) users to change device settings for specific listening situations and thereby personalize their listening experience. However, investigations into the real-world use of such listening programs to support clinical decisions and evaluate the success of HA treatment are lacking.
Objective: We aimed to investigate the provision of listening programs among a large group of in-market HA users and the contexts in which the programs are typically used.
Methods: First, we analyzed how many and which programs were provided to 32,336 in-market HA users. Second, we explored 332,271 program selections from 1312 selected users to investigate the sound environments in which specific programs were used and whether those environments reflect the listening intent conveyed by the name of the selected program. Our analysis was based on real-world longitudinal data logged by smartphone-connected HAs.
Results: In our sample, 57.71% (18,663/32,336) of the HA users had programs for specific listening situations, a higher proportion than previously reported, most likely because of the inclusion criteria. On the basis of association rule mining, we identified a primary additional listening program, Speech in Noise, which is frequent among users and often provided when other additional programs are also provided. We also identified 2 secondary additional programs (Comfort and Music), which are frequent among users who receive ≥3 programs and are usually provided in combination with Speech in Noise. In addition, 2 programs (TV and Remote Mic) were related to the use of external accessories and were not found to be associated with other programs. On average, users selected Speech in Noise, Comfort, and Music in louder, noisier, and less-modulated (all P<.01) environments than the environments in which they selected the default program, General. The difference from the sound environment in which they selected General was significantly larger in the minutes following program selection than in the minutes preceding it.
Conclusions: This study provides a deeper insight into the provision of listening programs on a large scale and demonstrates that additional listening programs are used as intended and according to the sound environment conveyed by the program name.
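As a rough illustration of the association rule mining step, the sketch below mines co-provision patterns from a one-hot table of provided programs (e.g., Speech in Noise frequently provided alongside Comfort or Music). The toy table, program columns, and support/confidence thresholds are assumptions for illustration only, not the study's data or parameters.

```python
# Hypothetical sketch: frequent itemsets and association rules over which listening
# programs are provided together, using mlxtend on a one-hot user-by-program table.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row is a user; each column marks whether that program was provided (toy data).
programs = pd.DataFrame({
    "General":         [1, 1, 1, 1, 1, 1],
    "Speech in Noise": [1, 1, 1, 0, 1, 1],
    "Comfort":         [1, 0, 1, 0, 0, 1],
    "Music":           [1, 0, 0, 0, 1, 1],
    "TV":              [0, 1, 0, 0, 0, 0],
}, dtype=bool)

# Frequent program combinations, then rules of the form {A} -> {B}.
itemsets = apriori(programs, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```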