We design a personalized human-robot environment for social learning for individuals with autism spectrum disorders (ASD). To define an individual's profile, we posit that reliance on proprioceptive and kinematic visual cues should affect the way an individual with ASD interacts with a social agent (human, robot, or virtual agent). In this paper, we assess the potential link between recognition performance for body/facial expressions of emotion of increasing complexity, emotion recognition on platforms with different visual features (two mini-humanoid robots, a virtual agent, and a human), and an individual's integration of proprioceptive and visual cues. First, we describe the design of the EMBODI-EMO database, which contains videos of controlled body/facial expressions of emotion performed on these platforms, and explain how we validated it with typically developed (TD) individuals. We then investigate the relationship between emotion recognition and the proprioceptive and visual profiles of TD individuals and individuals with ASD. For TD individuals, our results indicate a relationship between profile and emotion recognition: as expected, TD individuals who rely more heavily on visual cues obtained better recognition scores. However, contrary to our hypothesis, we also found that TD individuals relying on proprioception achieved better recognition scores. Finally, participants with ASD who relied more heavily on proprioceptive cues obtained lower emotion recognition scores across all conditions than participants relying on visual cues.