Running title: Brain representation of computational body features

Abstract

Humans and other primate species are experts at recognizing affective information from body movements, but the underlying brain mechanisms are still largely unknown. Previous research focusing on the brain representation of symbolic emotion categories has led to mixed results.
This study used representational similarity and multi-voxel pattern analysis techniques to investigate how postural and kinematic features computed from affective whole-body movement videos are related to brain processes. We show that body posture and kinematics differentially activated brain regions, indicating that this information might be selectively encoded in these regions. More specifically, the feature limb contraction seemed to be particularly relevant for distinguishing fear, and it was represented in several regions spanning the affective, action observation and motor preparation networks. Our approach goes beyond traditional methods of mapping symbolic emotion categories to brain activation/deactivation by discovering which specific movement features are encoded in the brain and possibly drive automatic emotion perception.

accuracy (>80%). Detailed information regarding the recording and validation of these stimuli can be found in Kret et al. (2011b).

Pose estimation

The state-of-the-art 2D pose estimation library OpenPose (v1.0.1, Cao et al., 2017) was used to estimate the pose of each actor in the video stimuli. By means of a convolutional neural network, OpenPose estimates the position (i.e. x- and y-coordinates) of a total of 18 keypoints corresponding to the main body joints (i.e. ears, eyes, nose, neck, shoulders, elbows, hands, left and right part of the hip, knees and feet). Subsequently, a skeleton is produced by association of pairs of keypoints using part affinity fields (see Figure 1.A for examples of our stimuli with the OpenPose skeleton). In the current study, the keypoints belonging to the eyes and ears were excluded from further analyses since the blurring of the actors' faces often resulted in inaccurate location estimation. The keypoint corresponding to the nose was kept, however, as a reference for head position. Therefore, the x- and y-coordinates were obtained for a total of 14 keypoints.
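As an illustration only, the exclusion of the eye and ear keypoints could be implemented along the lines of the following Python sketch. The field names (people, pose_keypoints) and the keypoint ordering (COCO layout, with the nose at index 0 and the eyes and ears at indices 14-17) reflect the per-frame JSON output of the OpenPose demo and are assumptions that should be checked against the exact OpenPose version used.

import json
from pathlib import Path

import numpy as np

# Keypoints 0-13 in the 18-keypoint COCO ordering cover nose, neck,
# shoulders, elbows, wrists, hips, knees and ankles; indices 14-17
# (eyes and ears) are dropped because face blurring made their
# estimated locations unreliable. The nose (index 0) is kept as a
# reference for head position.
KEPT_KEYPOINTS = list(range(14))

def load_pose_trajectory(json_dir):
    """Stack per-frame OpenPose JSON files into an array of shape
    (n_frames, 14, 2) holding the x- and y-coordinates."""
    frames = []
    for json_file in sorted(Path(json_dir).glob("*.json")):
        with open(json_file) as f:
            data = json.load(f)
        # One actor per video is assumed; take the first detected person.
        kp = np.asarray(data["people"][0]["pose_keypoints"], dtype=float)
        kp = kp.reshape(18, 3)                 # (x, y, confidence) per keypoint
        frames.append(kp[KEPT_KEYPOINTS, :2])  # keep x/y, drop confidence
    return np.stack(frames)                    # (n_frames, 14, 2)

The resulting (n_frames, 14, 2) trajectory is the kind of per-video representation from which postural and kinematic features (e.g. limb contraction) can subsequently be computed.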