With the widespread use of toxic language online, platforms increasingly deploy automated systems that leverage advances in natural language processing to flag and remove toxic comments. However, most automated systems, when detecting and moderating toxic language, do not provide feedback to their users, let alone an avenue of recourse for users to make actionable changes. We present RECAST, an interactive, open-source web tool for visualizing these models' toxicity predictions and suggesting alternatives for flagged toxic language, giving users a new path of recourse under automated moderation. RECAST highlights the text responsible for a toxicity classification and allows users to interactively substitute potentially toxic phrases with neutral alternatives. We examined the effect of RECAST via two large-scale user evaluations and found that RECAST was highly effective at helping users reduce toxicity as detected by the model. Users also gained a stronger understanding of the underlying toxicity criteria used by black-box models, enabling transparency and recourse. In addition, we found that when users optimize their language for these models instead of relying on their own judgment (the implied incentive and goal of deploying automated models), the models cease to be effective classifiers of toxicity relative to human annotations. This opens a discussion of how toxicity detection models work and should work, and of their effect on the future of online discourse.
This paper presents the validation of the Computational-Rabi's Driver Training (C-RDT) model for primed decision-making. Demonstrating the model's workability requires validation, a method for ensuring the model's logical correctness; such a validation has not yet been reported in the literature, and providing it is the novelty of this study. To validate the C-RDT model, an experimental method was adopted in which an experiment was conducted with human participants. The features of an adapted game-based driving simulator were mapped to the external factors of the model's awareness component, and the validation instrument was designed around the external and temporal factors of the model's training component. A post-test-only experiment was used to examine the effectiveness of the model factors, determining the effect of training with the game simulator on a driver's automaticity in making effective primed decisions. Participants were divided into a control group and an experimental group; the experimental group received training while the control group did not. Two hypotheses, H0 and H1, were set based on the training outcomes. The results show that participants in the experimental group had better decision-making skills than those in the control group. This supports the alternative hypothesis (H1), implying that the training factors in the model improved drivers' primed decision-making during emergencies and thereby demonstrating the validity of the model.