Every piece of music conveys emotion in the sounds it presents. Detecting the emotion in music is difficult because the emotions a listener feels are subjective. This motivates an automatic classification system that detects the emotions conveyed by music. This paper presents the development of an emotion classification system for instrumental music. The system receives as input a music file in WAV format. Feature extraction is then performed using Mel-Frequency Cepstral Coefficients (MFCC), and the extracted features are classified with the K-Nearest Neighbor (K-NN) algorithm. The system outputs one of three emotion labels: happy, relaxed, or sad. The classifier achieved an accuracy of 97.5% for k = 1, 95% for k = 2, and 90% for k = 3.
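
The paper does not include its implementation, but the pipeline it describes (WAV input, MFCC feature extraction, K-NN classification) could be sketched as follows. This is a minimal sketch assuming librosa for MFCC extraction and scikit-learn for K-NN; the file names, n_mfcc=13, and mean pooling over frames are hypothetical choices for illustration, not the paper's specification.

```python
# Minimal sketch of the described pipeline: WAV -> MFCC -> K-NN.
# Assumes librosa and scikit-learn; parameters are illustrative only.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def extract_mfcc(path, n_mfcc=13):
    """Load a WAV file and return a fixed-length MFCC feature vector."""
    y, sr = librosa.load(path, sr=None)         # keep the file's sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                    # average coefficients over frames

# Hypothetical training data: file paths paired with emotion labels.
train_files = ["happy_01.wav", "relaxed_01.wav", "sad_01.wav"]
train_labels = ["happy", "relaxed", "sad"]

X_train = np.array([extract_mfcc(f) for f in train_files])
knn = KNeighborsClassifier(n_neighbors=1)       # k = 1 gave the best reported accuracy
knn.fit(X_train, train_labels)

# Classify a new instrumental track.
prediction = knn.predict([extract_mfcc("unknown.wav")])
print(prediction[0])                            # e.g. "happy"
```

Mean pooling over frames is one simple way to obtain a fixed-length vector from the variable-length MFCC matrix; the paper's actual feature aggregation may differ.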