Recommender systems increasingly provide explanations to increase trust in their recommendations. However, studies on explaining recommendations typically target adults in low-risk e-commerce or media contexts, and the use of explanations in e-learning has received little research attention. To address these gaps, we investigated how explanations affect adolescents' trust in an exercise recommender on a mathematical e-learning platform. In a randomized controlled experiment with 37 adolescents, we compared real explanations with placebo explanations and no explanations. Our results show that explanations can significantly increase initial trust when measured as a multidimensional construct of competence, benevolence, integrity, intention to return, and perceived transparency. Yet, as not all adolescents in our study attached equal importance to explanations, it remains important to tailor them. To study the impact of tailored explanations, we advise researchers to include placebo baselines in their studies, as these may reveal more than no-explanation baselines about how much transparency people actually need.
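To make the study design concrete, the sketch below simulates how a composite trust score might be compared across the three explanation conditions. It is a minimal illustration on synthetic data: the column names, the 7-point scale, and the choice of a Kruskal-Wallis test are assumptions, not the paper's actual dataset or analysis.

```python
import numpy as np
import pandas as pd
from scipy.stats import kruskal

rng = np.random.default_rng(42)
dimensions = ["competence", "benevolence", "integrity",
              "intention_to_return", "perceived_transparency"]
conditions = ["real", "placebo", "none"]

# Simulated 7-point Likert responses for 37 participants (illustrative only).
df = pd.DataFrame({dim: rng.integers(1, 8, size=37) for dim in dimensions})
df["condition"] = rng.choice(conditions, size=37)

# Trust as a multidimensional construct: average the five dimension scores.
df["trust"] = df[dimensions].mean(axis=1)

# Nonparametric omnibus test across conditions, suited to small groups.
groups = [df.loc[df["condition"] == c, "trust"] for c in conditions]
stat, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
```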
Explainable artificial intelligence is increasingly used in machine learning (ML)-based decision-making systems in healthcare. However, little research has compared the utility of different explanation methods in guiding healthcare experts for patient care. Moreover, it is unclear how useful, understandable, actionable, and trustworthy these methods are for healthcare experts, as they often require technical ML knowledge. This paper presents an explanation dashboard that predicts the risk of diabetes onset and explains those predictions with data-centric, feature-importance, and example-based explanations. We designed an interactive dashboard to assist healthcare experts, such as nurses and physicians, in monitoring the risk of diabetes onset and recommending measures to minimize risk. We conducted a qualitative study with 11 healthcare experts and a mixed-methods study with 45 healthcare experts and 51 diabetic patients to compare the different explanation methods in our dashboard in terms of understandability, usefulness, actionability, and trust. Results indicate that our participants preferred our representation of data-centric explanations, which pairs local explanations with a global overview, over the other methods. Therefore, this paper highlights the importance of visually directive data-centric explanation methods for helping healthcare experts gain actionable insights from patient health records. Furthermore, we share our design implications for tailoring the visual representation of different explanation methods for healthcare experts.
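As a point of reference for the feature-importance explanations the dashboard compares, the sketch below computes a global importance ranking for a diabetes-onset classifier. The model, feature names, and synthetic data are assumptions for illustration; they do not reproduce the paper's actual pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["glucose", "bmi", "age", "blood_pressure", "insulin"]

# Synthetic patient records; onset risk loosely driven by glucose and BMI.
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
y = (0.8 * X["glucose"] + 0.5 * X["bmi"]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global feature-importance explanation via permutation importance:
# how much does shuffling each feature degrade held-out accuracy?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:15s} {score:.3f}")
```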
People's trust in prediction models can be affected by many factors, including domain expertise such as knowledge of the application domain and experience with predictive modelling. However, to what extent and why domain expertise impacts people's trust is not entirely clear. In addition, accurately measuring people's trust remains challenging. We share our results and experiences from an exploratory pilot study in which four people experienced with predictive modelling systematically explored a visual analytics system with an unknown prediction model. Through a mixed-methods approach involving Likert-type questions and a semi-structured interview, we investigate how people's trust evolves during their exploration, and we distil six themes that affect their trust in the prediction model. Our results underline the multi-faceted nature of trust and suggest that domain expertise alone cannot fully predict people's trust perceptions.
Gamification researchers deem adolescents a particularly interesting audience for tailored gamification. However, empirical validation of popular player typologies and personality trait models has thus far been limited to adults. As adolescents exhibit complex behaviours that differ from those of adults, these models may need adaptation. To that end, we collected a unique data set of Big Five Inventory and Hexad questionnaire answers in Dutch from 402 adolescents. Confirmatory factor analysis showed that the Dutch forms of the BFI-10, BFI-44, and Hexad scales fit poorly when used with adolescents. Through exploratory factor analysis, we investigated the underlying problems and provide preliminary suggestions on how to improve measurement. In particular, we propose to simplify the Hexad model and to reformulate specific items. With this study, we hope to contribute to the debate on how to improve the tailoring of interactive systems for adolescents.
CCS Concepts: • Human-centered computing → HCI theory, concepts and models; Empirical studies in HCI.
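A minimal sketch of what the exploratory-factor-analysis step could look like, using the factor_analyzer package on simulated Hexad-style responses. The item count, response scale, rotation choice, and loading cutoff are assumptions for illustration, not the study's actual data or settings.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)
# Simulated 7-point responses to 24 Hexad-style items from 402 respondents.
items = pd.DataFrame(rng.integers(1, 8, size=(402, 24)),
                     columns=[f"hexad_{i + 1}" for i in range(24)])

# Six factors mirror the six Hexad user types; an oblique rotation lets
# factors correlate, as personality-style factors typically do.
efa = FactorAnalyzer(n_factors=6, rotation="oblimin")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
# Items that load weakly (|loading| < .40) on every factor are candidates
# for reformulation or removal.
print(loadings[(loadings.abs() < 0.40).all(axis=1)])
```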