Groups often face difficulty reaching consensus. For complex decisions with multiple criteria, verbal and written discourse alone may not help groups pinpoint and move past fundamental disagreements. To support consensus building, we introduce ConsensUs, a novel visualization tool that highlights disagreement by asking group members to quantify their subjective opinions across multiple criteria. To evaluate this approach, we conducted a between-subjects experiment with 87 participants on a comparative hiring task. The study compared three modes of sensemaking on a group decision: written discourse only, visualization only, and written discourse plus visualization. We first confirmed that the visualization helped participants identify disagreements within the group, and then measured subsequent changes to their individual opinions. The results show that disagreement highlighting led participants to align their ratings more closely with those of other group members. However, while scores converged, participants reported a range of reasons for shifting them, from genuine consensus to appeasement. We discuss directions for future research into how disagreement highlighting affects social processes and whether it produces objectively better decisions.
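The core mechanism, quantifying each member's opinion per criterion and surfacing where ratings diverge most, can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's implementation: the rating scale, the data shape, and the use of standard deviation as the disagreement measure are all assumptions made for the example.

```python
# Hypothetical sketch of disagreement highlighting (not the ConsensUs implementation):
# each member rates each criterion on a numeric scale, and criteria where ratings
# diverge most are flagged for group discussion.
from statistics import mean, stdev

# Assumed data shape: {criterion: {member: rating on a 1-7 scale}}
ratings = {
    "experience":    {"ana": 6, "ben": 5, "cho": 6},
    "communication": {"ana": 2, "ben": 6, "cho": 5},
    "culture_fit":   {"ana": 4, "ben": 4, "cho": 3},
}

def disagreement(scores):
    """Spread of ratings for one criterion; higher means more disagreement."""
    return stdev(scores.values())

# Rank criteria by disagreement so the group can focus on the sharpest splits.
for criterion, scores in sorted(ratings.items(), key=lambda kv: -disagreement(kv[1])):
    avg = mean(scores.values())
    outlier = max(scores, key=lambda m: abs(scores[m] - avg))
    print(f"{criterion}: spread={disagreement(scores):.2f}, "
          f"largest deviation from group mean: {outlier}")
```

Run on the sample data above, this surfaces "communication" as the most contested criterion, which is the kind of fundamental disagreement the visualization is designed to make visible.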
From diagnosis to patient scheduling, AI is increasingly being considered across a range of clinical applications. Yet despite its growing power, uptake of clinical AI into actual clinical workflows remains limited. One of the major challenges is developing appropriate trust with clinicians. In this paper, we examine trust in clinical AI from a wider perspective, beyond users' interactions with the AI. We identify several points in the development, usage, and monitoring of clinical AI that can have a significant impact on trust. We argue that calibrating trust in AI should go beyond explainable AI and address the entire process of clinical AI deployment. We illustrate this argument with case studies from practitioners deploying clinical AI, showing how trust can be affected at different stages of the deployment cycle.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.