With the arrival of the fourth industrial revolution, new technologies that integrate emotional intelligence into existing IoT applications are being studied. Among these technologies, emotion-analysis research for providing various music services has received increasing attention in recent years. In this paper, we propose an emotion-based automatic music classification method that classifies music with high accuracy according to the range of emotions it induces in listeners. In particular, when new (previously unlearned) songs are added to a music-related IoT application, mechanisms are needed to classify them automatically according to human emotion; this is one of the practical issues in developing such applications. A survey for collecting emotional data is conducted based on an emotional model, and music features are derived through discussions with a working group at a small and medium-sized enterprise. Emotion classification is then carried out using multiple regression analysis and a support vector machine. The experimental results show that the proposed method identifies most of the induced emotions felt by music listeners and accordingly classifies music successfully. In addition, a comparative analysis is performed against other classification algorithms, namely random forest, deep neural network, and K-nearest neighbor, as well as the support vector machine.
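The classification step described above can be illustrated with a minimal sketch. The feature names, synthetic data, and two-class emotion labels below are illustrative assumptions only (the paper's actual features come from a survey and expert discussion); the sketch simply shows how an SVM could assign a new, unlearned song to an emotion class from its feature vector.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Toy feature vectors (hypothetical: tempo, energy, brightness) for two
# assumed emotion classes: 0 = "calm", 1 = "excited".
calm = rng.normal(loc=[80.0, 0.3, 0.2], scale=[8.0, 0.03, 0.02], size=(50, 3))
excited = rng.normal(loc=[140.0, 0.8, 0.7], scale=[14.0, 0.08, 0.07], size=(50, 3))
X = np.vstack([calm, excited])
y = np.array([0] * 50 + [1] * 50)

# Standardize the features, then fit an RBF-kernel support vector machine.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# A new (unlearned) song is classified automatically from its feature vector.
new_song = np.array([[135.0, 0.75, 0.65]])
predicted = clf.predict(new_song)
print(predicted)
```

In a real system, the feature vector for each new song would be extracted automatically (or derived as in the paper), so newly added tracks can be placed into an emotion class without retraining on them first.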