“…Research in crowdsourcing has focused on several different issues: aggregating labels from multiple assessors to improve the quality of the gathered assessments, using unsupervised [Bashir et al., 2013; Hosseini et al., 2012], supervised [Pillai et al., 2013; Raykar and Yu, 2012; Raykar et al., 2010], and hybrid [Harris and Srinivasan, 2013] approaches; behavioural aspects [Kazai et al., 2012b]; the proper and careful design of Human Intelligence Tasks (HITs) [Alonso, 2013; Grady and Lease, 2010; Ipeirotis and Gabrilovich, 2014; Kazai et al., 2011], including gamification to improve quality [Eickhoff et al., 2012] and game theory to increase user engagement [Moshfeghi et al., 2016]; human–machine collaborative methods for training crowd workers [Abad, 2017; Abad et al., 2017]; and routing tasks to appropriate assessors [Jung and Lease, 2015; Law et al., 2011].…”