Chatbots, or conversational recommenders, have gained increasing popularity as a new paradigm for Recommender Systems (RS). Prior work on RS showed that providing explanations can improve transparency and trust, which are critical for the adoption of RS. Their interactive and engaging nature makes conversational recommenders a natural platform not only to provide recommendations but also to justify those recommendations through explanations. The recent surge of interest in explainable AI enables diverse styles of justification, and also invites questions on how styles of justification impact user perception. In this article, we explore the effect of “why” justifications and “why not” justifications on users’ perceptions of explainability and trust. We developed and tested a movie-recommendation chatbot that provides users with different types of justifications for the recommended items. Our online experiment (n = 310) demonstrates that the “why” justifications (but not the “why not” justifications) have a significant impact on users’ perception of the conversational recommender. In particular, “why” justifications increase users’ perception of system transparency, which affects perceived control and trusting beliefs, and in turn influences users’ willingness to depend on the system’s advice. Finally, we discuss the design implications for decision-assisting chatbots.
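To make the two justification styles concrete, the following is a minimal illustrative sketch in Python. It is not the authors’ implementation; the movie attributes, message templates, and function names are invented here purely to show the difference between a “why” justification (explaining why an item was recommended) and a “why not” justification (explaining why an alternative was not).

```python
# Hypothetical sketch of the two justification styles discussed in the abstract.
# Templates and data are invented; the article's actual chatbot wording differs.

from dataclasses import dataclass


@dataclass
class Movie:
    title: str
    genre: str
    rating: float  # average user rating on a 0-10 scale


def why_justification(rec: Movie, liked_genre: str) -> str:
    """Explain why an item WAS recommended, tied to the user's stated taste."""
    return (f"I recommend '{rec.title}' because you enjoy {liked_genre} movies "
            f"and it is highly rated ({rec.rating}/10).")


def why_not_justification(rejected: Movie, liked_genre: str) -> str:
    """Explain why a plausible alternative was NOT recommended."""
    return (f"I did not suggest '{rejected.title}' because it is a "
            f"{rejected.genre} movie, which does not match your preference "
            f"for {liked_genre}.")


if __name__ == "__main__":
    pick = Movie("Arrival", "sci-fi", 8.0)
    skipped = Movie("The Notebook", "romance", 7.9)
    print(why_justification(pick, "sci-fi"))
    print(why_not_justification(skipped, "sci-fi"))
```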
To protect vital health program funds from being paid out for services that are wasteful and inconsistent with medical practices, government healthcare insurance programs need to validate the integrity of claims submitted by providers for reimbursement. However, due to the complexity of healthcare billing policies and the lack of coded rules, maintaining “integrity” is a labor-intensive task that is often narrow in scope and expensive. We propose an approach that combines deep learning and an ontology to support the extraction of actionable knowledge on benefit rules from regulatory healthcare policy text. We demonstrate its feasibility even in the presence of a small set of ground-truth labels provided by policy investigators. Leveraging deep learning and rich ontological information enables the system to learn from human corrections and capture better benefit rules from policy text, beyond a deterministic approach based on pre-defined textual and semantic patterns.
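The general pattern the abstract describes can be sketched as follows: start from a deterministic, pattern-based extractor, then let a learned model refine its decisions using ontology-derived features and a small labeled set. This is only an assumption-laden toy, not the authors’ system: the mini-ontology, regex pattern, sentences, and labels are all invented, and a shallow logistic-regression classifier stands in for the paper’s deep learning component.

```python
# Toy sketch: combine a deterministic pattern rule with ontology features and
# a small labeled set to classify policy sentences as benefit rules or not.
# All data, terms, and patterns below are hypothetical.

import re

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical mini-ontology of billing concepts mapped to surface terms.
ONTOLOGY = {
    "frequency_limit": ["once per", "per year", "per 12 months"],
    "prior_auth": ["prior authorization", "pre-approval"],
}


def ontology_features(sentence: str) -> list[int]:
    """One binary indicator per ontology concept: does any of its terms appear?"""
    s = sentence.lower()
    return [int(any(t in s for t in terms)) for terms in ONTOLOGY.values()]


def pattern_rule(sentence: str) -> int:
    """Deterministic baseline: flag sentences matching a coverage-limit pattern."""
    return int(bool(re.search(r"\b(covered|limited to|allowed)\b", sentence, re.I)))


# Tiny labeled set (1 = sentence states a benefit rule), standing in for the
# small ground truth that policy investigators might provide.
sentences = [
    "Screening is covered once per 12 months for eligible members.",
    "Prior authorization is required for this procedure.",
    "This chapter describes the history of the program.",
    "Contact the regional office for general questions.",
]
labels = np.array([1, 1, 0, 0])

# Combine lexical (TF-IDF), ontology, and pattern features into one matrix.
tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(sentences).toarray()
X_extra = np.array([ontology_features(s) + [pattern_rule(s)] for s in sentences])
X = np.hstack([X_text, X_extra])

clf = LogisticRegression().fit(X, labels)

# Score a new policy sentence using the same feature construction.
new = "The service is limited to one visit per year."
x_new = np.hstack([tfidf.transform([new]).toarray(),
                   [ontology_features(new) + [pattern_rule(new)]]])
print(clf.predict(x_new))  # expected: [1] under these toy features
```

The design point is that the learned layer can be retrained as investigators correct its outputs, whereas the pattern rule alone stays fixed; this mirrors the abstract’s claim that learning from human corrections captures rules a purely deterministic approach misses.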
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.