The uptake of artificial intelligence-based applications raises concerns about the fairness and transparency of AI behaviour. Consequently, the Computer Science community calls for the involvement of the general public in the design and evaluation of AI systems. Assessing the fairness of individual predictors is an essential step in the development of equitable algorithms. In this study, we evaluate the effect of two common visualisation techniques (text-based and scatterplot) and of displaying outcome information (i.e., ground truth) on the perceived fairness of predictors. Our results from an online crowdsourcing study (N = 80) show that the chosen visualisation technique significantly alters people's fairness perception, and that the presented scenario, as well as the participant's gender and past education, influence perceived fairness. Based on these results, we draw recommendations for future work that seeks to involve non-experts in AI fairness evaluations.
CCS CONCEPTS
• Human-centered computing → Human-computer interaction (HCI); Collaborative and social computing; Empirical studies in collaborative and social computing.