Deep Neural Network (DNN) models have recently received considerable attention because their deep network structures can extract rich features, improving classification accuracy and achieving excellent results in the image domain. However, because music and images differ in content form, transferring deep learning to music classification remains a challenge. To address this issue, in this paper we transfer state-of-the-art DNN models to music classification and evaluate their performance on spectrograms. First, we convert music audio files into spectrograms through modal transformation, and then classify the music with deep learning. To alleviate overfitting during training, we propose a balanced trusted loss function and build the balanced trusted model ResNet50_trust. Finally, we compare the performance of different DNN models on music classification. In addition, this work adds music sentiment analysis based on a newly constructed music emotion dataset. Extensive experimental evaluations on three music datasets show that our proposed model ResNet50_trust consistently outperforms other DNN models.
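The modal transformation step above (audio file to spectrogram image) can be sketched with a plain short-time Fourier transform. This is a minimal NumPy illustration, not the paper's pipeline; the frame size, hop length, and decibel scaling are assumptions chosen for clarity.

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=128):
    """Magnitude spectrogram via a windowed short-time Fourier transform."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    # Slice the signal into overlapping, windowed frames.
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Real FFT of each frame; keep magnitudes only.
    mag = np.abs(np.fft.rfft(frames, axis=1))
    # Convert to decibels, the usual image-like input scale for DNNs.
    return 20 * np.log10(mag + 1e-10)

# Example: a 440 Hz tone sampled at 16 kHz for one second.
sr = 16000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (time frames, frequency bins)
```

The resulting 2-D array can be saved as an image and fed to any image-classification DNN such as ResNet50.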
Supervised topic modeling has been successfully applied to document classification and tag recommendation in recent years. However, most existing models neglect the fact that topic terms differ in their ability to distinguish topics. In this paper, we propose a term frequency-inverse topic frequency (TF-ITF) method for constructing a supervised topic model, in which the weight of each topic term indicates its ability to distinguish topics. We conduct a series of experiments with both symmetric and asymmetric Dirichlet prior parameters. Experimental results demonstrate that introducing TF-ITF into a supervised topic model outperforms several state-of-the-art supervised topic models.
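By analogy with TF-IDF over documents, a TF-ITF weight can be read as term frequency within a topic times the log-inverse of how many topics contain the term. The exact formulation in the paper may differ; this sketch only illustrates the intuition that topic-exclusive terms get boosted and shared terms get discounted.

```python
import math
from collections import Counter

def tf_itf(topics):
    """Weight each term in each topic by term frequency times inverse
    topic frequency (assumed formula, by analogy with TF-IDF)."""
    n_topics = len(topics)
    # In how many topics does each term appear?
    topic_freq = Counter(term for topic in topics for term in set(topic))
    weights = []
    for topic in topics:
        counts, total = Counter(topic), len(topic)
        weights.append({term: (c / total) * math.log(n_topics / topic_freq[term])
                        for term, c in counts.items()})
    return weights

# Toy "topics" as bags of terms (hypothetical data).
topics = [["ball", "goal", "team", "team"],
          ["vote", "law", "team"],
          ["gene", "cell", "gene"]]
w = tf_itf(topics)
# "team" occurs in two of three topics, so its weight is discounted
# relative to topic-exclusive terms such as "goal" or "gene".
```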
Because mixed data with numerical and categorical attributes is ubiquitous in the real world, a variety of clustering algorithms have been developed to discover the potential information hidden in such data. Most existing clustering algorithms compute the distances or similarities between data objects on the original data, which can make the clustering results unstable in the presence of noise. In this paper, a clustering framework is proposed to explore the grouping structure of mixed data. First, the categorical attributes, transformed by one-hot encoding, and the normalized numerical attributes are fed into a stacked denoising autoencoder to learn internal feature representations. Second, based on these feature representations, the distances between data objects in the feature space are calculated, and the local density and relative distance of each data object are computed. Third, an improved density peaks clustering algorithm is employed to allocate all data objects into clusters. Finally, experiments on several UCI datasets demonstrate that our proposed algorithm for clustering mixed data outperforms three baseline algorithms in terms of clustering accuracy and the Rand index.
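The local density and relative distance quantities named in the second step are the core of density peaks clustering. The following is a minimal sketch of those two statistics on raw 2-D features (standing in for the autoencoder representations); the Gaussian kernel and the cutoff `dc` are conventional choices, not necessarily those of the paper.

```python
import numpy as np

def density_peaks_stats(X, dc=1.0):
    """Local density rho (Gaussian kernel) and relative distance delta:
    for each point, delta is the distance to the nearest point of
    higher density; the densest point gets its maximum distance."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0  # exclude self-term
    delta = np.zeros(len(X))
    order = np.argsort(-rho)  # indices from densest to sparsest
    for rank, i in enumerate(order):
        if rank == 0:
            delta[i] = d[i].max()  # globally densest point
        else:
            delta[i] = d[i, order[:rank]].min()
    return rho, delta

# Two well-separated synthetic blobs (hypothetical data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(8, 1, (20, 2))])
rho, delta = density_peaks_stats(X)
# Cluster centers are the points where both rho and delta are large.
```

Points with high `rho * delta` are selected as cluster centers, and every remaining point is assigned to the cluster of its nearest higher-density neighbor.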