“…This theme synthesises the discussion of how and why AIPSED algorithms can lead to unfair, biased and inaccurate outcomes/decisions. Concerns about algorithmic fairness and bias (and the resulting discrimination) are raised by many authors (Akgun & Greenhow, 2022; Berendt et al., 2020; Bu, 2022; Corbeil & Corbeil, 2021; Crompton et al., 2022; Farrow, 2023; Filgueiras, 2023; Huang, 2023; Kasneci et al., 2023; Kitto & Knight, 2019; Lameras & Arnab, 2021; Leaton Gray, 2019; Li & Gu, 2023; Luan et al., 2020; Mavrikis et al., 2021; Nguyen et al., 2023; Pea et al., 2023; Saputra et al., 2023; Schiff, 2022; Su & Yang, 2023; Wei & Niemi, 2023). Specific concerns include a lack of attention to fairness in the development of AIPSED (Yan et al., 2023), excessive emphasis on certain pedagogies and learning styles (Schiff, 2021; Holstein & Doroudi, 2022; Porayska-Pomsta et al., 2023), the prioritisation of developers' own cultures and worldviews (Mohammed & Watson, 2019; Schiff, 2021), the presentation of tasks poorly matched to students' abilities, which impedes learning (Du Boulay, 2022a), difficulties in the ongoing monitoring of these tools for bias (Dieterle et al., 2022), the replication of historical biases present in educational practices and in society as a whole (Bartoletti, 2022; Holstein & Doroudi, 2022; Madaio et al., 2022; Rowe, 2019; Treviranus, 2022), and the classification of students according to reductive categories (e.g., demog...…”