This document is supplementary material for the paper “Changes in Research Ethics, Openness, and Transparency in Empirical Studies between CHI 2017 and CHI 2022,” published at ACM CHI 2023. The document provides the rationale behind each criterion in detail, with additional citations. We hope that knowing the rationale will better encourage practices related to research ethics, openness, and transparency. The original paper and all supplementary materials are freely available at: https://doi.org/10.17605/osf.io/n25d6
Automated training systems (ATS) for a range of skills are an emerging artificial intelligence (AI) technology that aims to deliver automatic assessment and feedback without human expert input. However, most current systems are not designed to explain their assessments in ways that help trainees improve their motivation and performance. We propose that explanations of the assessments and predictions made by these automated systems can act as feedback, providing tangible informational insights and increasing the trainee’s effectiveness and performance. Explainable AI (XAI) methods, which promote human understanding of an AI model’s decisions, have the potential to satisfy the informational needs of such feedback. However, explainable ATS must adhere to the main elements of feedback found in traditional manual methods (e.g., being actionable, having clear impact, and including examples). This paper discusses the validity of explanations as feedback and the challenges in designing and implementing explainable ATS.