“…However, the practice of normalizing multimodal data may require greater awareness of the underlying science about how the multimodal technology or modelling algorithms work. Within the MMLA community, this normalization process has frequently been applied with audio (Bassiou et al., 2016), electro-dermal activation (Dindar et al., 2020; Worsley & Blikstein, 2018), and facial expression (Grafsgaard et al., 2014; Worsley, Scherer, Morency, & Blikstein, 2015) analysis, as well as gesture classification (Schneider & Blikstein, 2015; Worsley & Blikstein, 2013). Given the extensive research on bias in facial expression and face recognition analysis based, for example, on race, gender, and ethnicity (Xu, White, Kalkan, & Gunes, 2020), there is an unmistakable need to effectively normalize the data and account for individual and group differences.…”
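The per-individual normalization the passage describes is often realized as within-participant standardization, so that group comparisons reflect relative change rather than person-level baseline differences. A minimal sketch of that idea, assuming a simple per-participant z-score on raw sensor streams (function and variable names here are illustrative, not from any cited work):

```python
import numpy as np

def normalize_per_participant(signals):
    """Z-score each participant's signal against their own baseline.

    `signals` maps a participant id to a 1-D sequence of raw sensor
    readings (e.g., electrodermal activity). Standardizing within each
    individual removes person-level offsets and scale differences.
    """
    normalized = {}
    for pid, x in signals.items():
        x = np.asarray(x, dtype=float)
        std = x.std()
        # Guard against a flat (zero-variance) signal.
        normalized[pid] = (x - x.mean()) / std if std > 0 else x - x.mean()
    return normalized

# Two hypothetical participants with different baselines but the same
# relative dynamics; after normalization their traces coincide.
raw = {
    "p1": [2.0, 2.1, 2.3, 2.0],
    "p2": [9.0, 9.1, 9.3, 9.0],
}
norm = normalize_per_participant(raw)
```

More sophisticated approaches (e.g., baseline-period correction or mixed-effects modelling) follow the same principle: the reference point for "high" or "low" is the individual, not the pooled group.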