In the field of human-robot interaction, socially interactive robots are often equipped with the ability to detect the affective states of users, the ability to express emotions through synthetic facial expressions, speech, and textual content, and the ability to imitate and learn socially. Past work on creating robots that can make convincing emotional expressions has concentrated on the quality of those expressions and on assessing people's ability to recognize them. Previous recognition studies presented the facial expressions of robots in neutral contexts, without any strongly emotionally valenced stimuli (e.g., emotionally valenced music or video). It is therefore worth empirically exploring whether observers' judgments of a robot's facial cues are affected by a surrounding emotional context. This thesis takes its inspiration from the contextual effects found on the interpretation of expressions on human faces and computer avatars, and examines the extent to which such effects also apply to the interpretation of the facial expressions of a mechanical robot head. The kinds of contexts that affect the recognition of robot emotional expressions, the circumstances under which such contextual effects occur, and the relationship between emotions and the surrounding situation are observed and analyzed in a series of 11 experiments. In these experiments, the Facial Action Coding System (FACS) (Ekman and Friesen, 2002) was applied to set the servo parameters that make the robot head produce sequences of facial expressions. Four different emotional surrounding or preceding contexts were used (i.e., recorded BBC News pieces, selected affective pictures, classical music pieces, and film clips). This thesis provides evidence that observers' judgments about the facial expressions of a robot can be affected by a surrounding emotional context.
From a psychological perspective, the contextual effects found on robotic facial expressions based on the FACS indirectly support the claims that human emotions are both biologically based and socially constructed. From a robotics perspective, it is argued that the results obtained from the analyses will be useful for guiding researchers in enhancing the expressive skills of emotional robots within a surrounding emotional context. This thesis also analyzes the possible factors contributing to the contextual effects found in the original 11 experiments. Some future work is also proposed, including four new experiments: a preliminary experiment designed to identify appropriate contextual materials, and three further experiments in which factors likely to produce a context effect are controlled one by one.