“…With the embodiment turn have emerged methods for collecting and analyzing multimodal data to model embodied interactions (Worsley and Blikstein, 2018; Abrahamson et al., 2021). These include techniques for analyzing gestures (Closser et al., 2021), eye gaze (Schneider and Pea, 2013; Shvarts and Abrahamson, 2019), facial expression (Monkaresi et al., 2016; Sinha, 2021), and grip intensity (Laukkonen et al., 2021), among others, coupled with traditional statistical methods, qualitative methods, and deep learning algorithms that model human behavior from massive amounts of mouse-click and text-based data (e.g., Facebook's DeepText, Google's RankBrain). This shift in research methods has been enabled by the proliferation of low-cost, high-bandwidth cameras and sensors that track biometric signals, facial expressions, and body movement, supplementing field notes, speech, text chat, and click-log data (Schneider and Radu, 2022).…”
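To make concrete what analyzing such heterogeneous streams together typically involves, the following is a minimal sketch, assuming Python with pandas, of one common preprocessing step: aligning an eye-gaze stream with click-log events on a shared timeline. The column names, sample timestamps, and 50 ms matching tolerance are illustrative assumptions, not details drawn from the cited studies.

```python
# Minimal sketch (illustrative, not from the cited studies): aligning two
# multimodal streams sampled at different rates onto a shared timeline.
import pandas as pd

# Hypothetical eye-gaze samples (~60 Hz) from an eye tracker.
gaze = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2024-01-01 10:00:00.000", "2024-01-01 10:00:00.016",
         "2024-01-01 10:00:00.033"]),
    "gaze_x": [512, 518, 530],
    "gaze_y": [300, 298, 310],
})

# Hypothetical click-log events from the learning environment.
clicks = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 10:00:00.020"]),
    "target": ["submit_button"],
})

# Attach to each click the nearest gaze sample within 50 ms, so that
# screen events can be analyzed alongside where the learner was looking.
aligned = pd.merge_asof(
    clicks.sort_values("timestamp"),
    gaze.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("50ms"),
)
print(aligned)
```

Once streams are aligned this way, the fused records can feed the statistical or deep learning models the passage describes; the same timestamp-matching pattern extends to facial-expression or grip-sensor streams.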