AI-assisted decision-making that impacts individuals raises critical questions about transparency and fairness in artificial intelligence (AI). Much research has highlighted the reciprocal relationship between transparency/explanation and fairness in AI-assisted decision-making. Considering their simultaneous impact on user trust and perceived fairness would therefore benefit the responsible use of socio-technical AI systems, yet this currently receives little attention. In this paper, we investigate the effects of AI explanations and fairness on human-AI trust and perceived fairness, respectively, in specific AI-based decision-making scenarios. A user study simulating AI-assisted decision-making in two scenarios, health insurance and medical treatment, provided important insights. Due to the global pandemic and the associated restrictions, the user studies were conducted as online surveys. From the participants' trust perspective, fairness was found to affect user trust only at a low fairness level, with the low fairness level reducing user trust. However, adding explanations helped users increase their trust in AI-assisted decision-making. From the perspective of perceived fairness, our work found that a low level of introduced fairness decreased users' perceptions of fairness, while a high level increased them. The addition of explanations consistently increased the perception of fairness. Furthermore, we found that the application scenario influenced both trust and perceptions of fairness. The results show that the use of AI explanations and fairness statements in AI applications is complex: we need to consider not only the type of explanation and the degree of fairness introduced, but also the scenarios in which AI-assisted decision-making is used.
The use of mobile applications is increasing in hospitals. Besides numerous advantages such as time- and location-independent viewing of examination results, documentation of therapy at the patient's bedside, point-of-care workflow, and decision support, smartphone- or tablet-based mobile systems pose challenges to system manufacturers and hospital IT operators. In this paper, we present our experiences in managing mobile, Android-based medical devices in hospitals, based on the GlucoTab®. GlucoTab® is a client-server system that uses a Google Android-based tablet as a frontend device for diabetes workflow and insulin decision support in hospitals. We analysed the hospital IT infrastructure for mobile devices in eight different hospitals and drew on recently published reports. Based on this analysis, we developed suggestions for handling typical administrative issues such as domain integration, software inventory and distribution, and user management. Depending on the integration prerequisites of hospitals, we finally provide five different integration scenarios for the GlucoTab® system.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the cited article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.