Unlike their human counterparts, artificial agents such as robots and game characters may be deployed with a large variety of face and body configurations. Some have articulated bodies but lack facial features, and others may be talking heads ending at the neck. Generally, they have many fewer degrees of freedom than humans through which they must express themselves, and there will inevitably be a filtering effect when mapping human motion onto the agent. In this paper, we investigate filtering effects on three types of embodiments: (a) an agent with a body but no facial features, (b) an agent with a head only, and (c) an agent with a body and a face. We performed a full performance capture of a mime actor enacting short interactions, varying the non-verbal expression along five dimensions (e.g. level of frustration and level of certainty) for each of the three embodiments. We then conducted a crowdsourced evaluation experiment comparing the video of the actor to the video of an animated robot for the different embodiments and dimensions. Our findings suggest that the face is especially important for pinpointing emotional reactions, but is also the most vulnerable to filtering effects. The body motion, on the other hand, invited more diverse interpretations, but those interpretations tended to be preserved after mapping, and thus the body proved more resilient to filtering.