The popularity of the Internet has driven rapid development in artificial intelligence, affective computing, the Internet of Things (IoT), and related technologies. In particular, advances in IoT provide a stronger foundation for realizing the smart home. However, once people's material needs are satisfied, they increasingly seek emotional communication. Music carries rich emotional information: it is an important medium of communication between people and an effective way to convey emotion. It has therefore become one of the most convenient and natural modes of intelligent human-computer interaction. Traditional music emotion recognition methods suffer from drawbacks such as low recognition rates and high time consumption. We therefore propose a generative adversarial network (GAN) model based on intelligent data analytics for music emotion recognition under IoT. Driven by a dual-channel fusion strategy, the GAN can effectively extract both the local and global features of an image or voice signal. Meanwhile, to increase the feature differences between emotional voices, the Mel-frequency cepstral coefficient (MFCC) feature matrix of the music signal is transformed to improve the expressive ability of the GAN. Experimental results show that the proposed model effectively recognizes music emotion. Compared with other state-of-the-art approaches, the recognition error rate of the proposed method is greatly reduced, and its accuracy exceeds 87%, higher than that of the competing methods.
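As a minimal illustrative sketch (not the paper's implementation), the MFCC feature matrix referenced above can be computed with a standard audio library such as librosa; the per-coefficient standardization shown here is an assumed placeholder for the paper's transformation step, which is its own contribution.

import numpy as np
import librosa  # widely used audio-analysis library

def mfcc_feature_matrix(path, n_mfcc=13, sr=22050):
    """Load a music clip and return its MFCC feature matrix of shape (n_mfcc, frames)."""
    y, sr = librosa.load(path, sr=sr)  # mono waveform resampled to a fixed rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Assumed transformation: standardize each coefficient across frames,
    # a common way to enlarge feature differences before feeding a GAN.
    mfcc = (mfcc - mfcc.mean(axis=1, keepdims=True)) / (
        mfcc.std(axis=1, keepdims=True) + 1e-8
    )
    return mfcc

# Example usage (hypothetical file name):
# features = mfcc_feature_matrix("clip.wav")
# print(features.shape)  # e.g. (13, number_of_frames)

The resulting matrix can then be treated as a two-dimensional input to the GAN's feature-extraction channels, analogous to an image.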