Emotion recognition in the human brain normally incorporates context, body language, facial expressions, verbal and non-verbal cues, gestures, and tone of voice. When considering only the face, piecing together the various aspects of each facial feature is critical to identifying the emotion. Since viewing a single facial feature in isolation may produce inaccuracies, this paper trains neural networks to first identify specific facial features in isolation, and then uses the general pattern of expressions across the face to identify the overall emotion. The reasons for classification inaccuracies are also examined.
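Although the abstract does not specify the architecture, the two-stage approach it describes (per-feature classifiers followed by whole-face aggregation) could be sketched as follows. This is a minimal illustration in PyTorch; the region names, layer sizes, and seven-class output are assumptions for the sketch, not the authors' design.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Hypothetical stage-1 classifier: a small CNN applied to one
    facial region (e.g. eyes, brows, mouth) in isolation."""
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # LazyLinear infers the flattened input size on first forward pass
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(x))

class TwoStageEmotionNet(nn.Module):
    """Stage 1: classify each facial region independently.
    Stage 2: fuse the per-region predictions into an overall emotion."""
    def __init__(self, regions=("eyes", "brows", "mouth"), n_classes: int = 7):
        super().__init__()
        self.region_nets = nn.ModuleDict({r: FeatureNet(n_classes) for r in regions})
        # Simple linear fusion over concatenated per-region logits
        self.fusion = nn.Linear(len(regions) * n_classes, n_classes)

    def forward(self, crops: dict) -> torch.Tensor:
        # crops maps region name -> (B, 1, H, W) cropped grayscale tensor
        logits = [self.region_nets[r](crops[r]) for r in self.region_nets]
        return self.fusion(torch.cat(logits, dim=1))

# Usage with dummy crops: output is (4, 7) overall-emotion logits
crops = {r: torch.randn(4, 1, 32, 32) for r in ("eyes", "brows", "mouth")}
out = TwoStageEmotionNet()(crops)
```

A linear layer is the simplest choice of fusion; the pattern-of-expressions step the paper describes could equally be a deeper network over the concatenated per-feature outputs.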