In some works, the crowdworkers are evaluated, without being notified, by incorporating texts from the pre-annotated sample into each crowdworker task; less accurate crowdworkers are then disqualified based on comparison with the expert labels (Albadi et al., 2018, 2022; Alhelbawy et al., 2016; Chowdhury et al., 2020; Mubarak, Hassan, & Chowdhury, 2022; Shannag et al., 2022). Another approach is to select crowdworkers with good reputation scores, which are provided by the crowdsourcing platform (Ousidhoum et al., 2019).
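The hidden-gold-item scheme described above can be sketched as follows. This is a minimal illustration, not any cited paper's implementation: the function name, data layout, and accuracy threshold are all hypothetical assumptions.

```python
# Hypothetical sketch of gold-question quality control: pre-annotated
# ("gold") items are mixed unannounced into each crowdworker's task,
# and workers whose accuracy on those items falls below a threshold
# are disqualified. All names and data below are illustrative.

def filter_workers(annotations, gold_labels, min_accuracy=0.8):
    """annotations: {worker_id: {item_id: label}};
    gold_labels: {item_id: expert_label} for the hidden gold items.
    Returns the retained workers with their gold-item accuracy."""
    kept = {}
    for worker, labels in annotations.items():
        gold_seen = [i for i in labels if i in gold_labels]
        if not gold_seen:
            continue  # worker saw no gold items; cannot be evaluated
        correct = sum(labels[i] == gold_labels[i] for i in gold_seen)
        accuracy = correct / len(gold_seen)
        if accuracy >= min_accuracy:
            kept[worker] = accuracy
    return kept

# Toy example: "g1"/"g2" are hidden gold items, "t1"/"t2" ordinary ones.
annotations = {
    "w1": {"t1": "hate", "t2": "neutral", "g1": "hate", "g2": "hate"},
    "w2": {"t1": "neutral", "g1": "neutral", "g2": "hate"},
}
gold = {"g1": "hate", "g2": "hate"}
print(filter_workers(annotations, gold))  # → {'w1': 1.0}; w2 scores 1/2 and is dropped
```

In practice the threshold and the ratio of gold to ordinary items are platform- and task-specific design choices.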