The framework proposed in this paper has the primary objective of classifying the facial expression displayed by a person as one of the six universal emotions or the neutral expression. After initial face localization, facial landmark detection and feature extraction are applied, wherein the landmarks are taken to be the fiducial features: the eyebrows, eyes, nose, and lips. This is done primarily using the Sobel operator and the Hough transform, followed by Shi-Tomasi corner detection. Input feature vectors are then formulated from Euclidean distances between the detected landmarks and used to train a Multi-Layer Perceptron (MLP) neural network that classifies the displayed expression. The results are further discussed with respect to the higher classification uniformity observed for certain emotions and the inherently subjective nature of expression.
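A minimal sketch of this pipeline is given below, assuming OpenCV, NumPy, and scikit-learn. The Haar cascade used for face localization, the function names (localize_face, extract_feature_vector), and all parameter values are illustrative assumptions, not the paper's implementation; the Hough transform step (e.g., for iris localization within the eye regions) is noted in a comment but omitted from the sketch for brevity.

```python
# Sketch of the described pipeline: face localization -> Sobel edges ->
# Shi-Tomasi corners -> Euclidean-distance feature vector -> MLP classifier.
# Assumptions are noted inline; this is not the paper's exact implementation.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

# Six universal emotions plus neutral, as in the paper.
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def localize_face(image_bgr):
    """Initial face localization (a Haar cascade is one common choice;
    the paper does not specify the detector, so this is an assumption)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray[y:y + h, x:x + w]

def extract_feature_vector(gray_face, n_corners=10):
    """Detect corner landmarks on the fiducial features and build a
    fixed-length vector of pairwise Euclidean distances."""
    # Sobel gradients emphasize edges of eyebrows, eyes, nose, and lips.
    gx = cv2.Sobel(gray_face, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_face, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    # A Hough transform (e.g., cv2.HoughCircles on the eye regions) could
    # refine landmark localization here, as the paper describes.

    # Shi-Tomasi corner detection on the edge map.
    corners = cv2.goodFeaturesToTrack(edges, maxCorners=n_corners,
                                      qualityLevel=0.01, minDistance=10)
    vec = np.zeros(n_corners * (n_corners - 1) // 2)
    if corners is None:
        return vec
    pts = corners.reshape(-1, 2)

    # Pairwise Euclidean distances between landmarks form the feature vector.
    dists = [np.linalg.norm(pts[i] - pts[j])
             for i in range(len(pts)) for j in range(i + 1, len(pts))]
    vec[:len(dists)] = dists
    return vec

# Train the MLP on distance vectors from labeled face images.
# X_train / y_train below are random stand-ins; real use would call
# extract_feature_vector(localize_face(img)) over a labeled dataset.
rng = np.random.default_rng(0)
X_train = rng.random((70, 45))                     # 45 = C(10, 2) distances
y_train = rng.integers(0, len(EMOTIONS), size=70)  # stand-in emotion labels
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(EMOTIONS[clf.predict(X_train[:1])[0]])
```

Using pairwise distances rather than raw corner coordinates, as the paper's Euclidean-distance formulation suggests, makes the feature vector invariant to the face's position within the cropped region, leaving the MLP to learn only the geometric configuration of the landmarks.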