Music emotion information is widely used in music information retrieval, music recommendation, music therapy, and so forth. In the field of music emotion recognition (MER), computer scientists extract musical features to identify musical emotions, but this approach ignores listeners' individual differences. Applying machine learning methods, this study modeled the relations among audio features, individual factors, and music emotions. We used audio features and individual features as inputs to predict the perceived emotion and felt emotion of music, respectively. The results show that real-time individual features (e.g., preference for the target music and mechanism indices) can significantly improve model performance, whereas stable individual features (e.g., sex, music experience, and personality) have no effect. Compared with the recognition models of perceived emotions, individual features have greater effects on the recognition models of felt emotions.
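The modeling setup described above — concatenating audio features with individual features to predict an emotion label — can be sketched as follows. This is an illustrative example, not the authors' code: the feature names, values, and the nearest-centroid classifier are all hypothetical stand-ins for whatever features and learners the study actually used.

```python
# Sketch: predicting felt emotion from audio features plus real-time
# individual features (e.g., a preference rating for the target piece).
# Nearest-centroid classification is used purely for illustration.
import math
from collections import defaultdict

def concat_features(audio, individual):
    """Combine audio features with individual features into one vector."""
    return audio + individual

def train_centroids(samples):
    """samples: list of (feature_vector, emotion_label) pairs.
    Returns the mean feature vector (centroid) per emotion label."""
    sums, counts = {}, defaultdict(int)
    for vec, label in samples:
        if label not in sums:
            sums[label] = [0.0] * len(vec)
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def predict(centroids, vec):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))

# Toy training data: [tempo, energy] audio features + [preference] rating
train = [
    (concat_features([0.9, 0.8], [0.9]), "joy"),
    (concat_features([0.2, 0.1], [0.2]), "sadness"),
]
model = train_centroids(train)
print(predict(model, concat_features([0.85, 0.7], [0.8])))  # → joy
```

Swapping the individual-feature component in and out of the input vector is one way to measure how much those features improve prediction, which mirrors the comparison the abstract reports.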
The identification of a spatial pattern (target) presented to one fingerpad may be interfered with by the presentation of a second pattern (nontarget) to either the same fingerpad or a second fingerpad. A portion of the interference appears to be due to masking and a portion to response competition. In the present study, vibrotactile spatial patterns were designed to extend over two fingerpads. Target and nontarget patterns were presented to the same two fingerpads with a temporal separation between the two patterns. The function relating target identification to the temporal separation between the target and nontarget was very similar to the functions obtained with one-finger patterns in temporal masking studies. Subsequent measurements showed that a substantial portion of the interference resulted from response competition. Pattern categorization was better when patterns were presented to two fingers on opposite hands than to two fingers on the same hand; however, there was more interference for patterns presented bilaterally than for patterns presented ipsilaterally. The results supported the conclusion that similar processes are involved in the perception of sequences of spatial patterns whether the patterns are presented to one or to two fingers.
As one of the most popular social media platforms in China, Weibo has aggregated a huge volume of text containing people's thoughts, feelings, and experiences. Analyzing emotions expressed on Weibo has attracted a great deal of academic attention. An emotion lexicon is a vital foundation of sentiment analysis, but existing lexicons still have defects such as a limited variety of emotions, poor cross-scenario adaptability, and the conflation of written-language and online expressions. By combining grounded theory and semi-automatic methods, we built a Weibo-based emotion lexicon for sentiment analysis. We first took a bottom-up approach to derive a theoretical model of the emotions expressed on Weibo, and the substantive coding led to eight core emotion categories: joy, expectation, love, anger, anxiety, disgust, sadness, and surprise. Second, we built a new emotion lexicon containing 2,964 words by manually selecting seed words, constructing a word vector model to expand the word set, and defining rules to filter words. Finally, we tested the effectiveness of our lexicon by using a lexicon-based approach to recognize the emotions expressed in Weibo text. The results showed that our lexicon performed better in Weibo emotion recognition than five other Chinese emotion lexicons. This study proposed a method for constructing an emotion lexicon that considers both theory and application by combining qualitative research and artificial intelligence methods. Our work also provides a reference for future research in the field of social media sentiment analysis.
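The lexicon-based recognition step described above can be sketched as a simple word-matching count over the eight emotion categories. This is an illustrative sketch, not the authors' implementation: the lexicon entries below are hypothetical examples, and real Weibo text would first need a Chinese word segmenter (e.g., jieba), which is assumed to have already produced the tokens here.

```python
# Sketch: lexicon-based emotion recognition. Each token that appears in
# the emotion lexicon votes for its category; the dominant category wins.
from collections import Counter

# Tiny hypothetical stand-in for the eight-category Weibo lexicon
lexicon = {
    "开心": "joy", "期待": "expectation", "爱": "love",
    "愤怒": "anger", "焦虑": "anxiety", "恶心": "disgust",
    "伤心": "sadness", "惊讶": "surprise",
}

def recognize_emotions(tokens):
    """Count lexicon hits per emotion category for a segmented text."""
    return Counter(lexicon[t] for t in tokens if t in lexicon)

# Pre-segmented toy input ("so happy today, also looking forward to tomorrow")
tokens = ["今天", "真", "开心", "也", "很", "期待", "明天", "开心"]
counts = recognize_emotions(tokens)
print(counts.most_common(1)[0][0])  # → joy (2 hits vs. 1 for expectation)
```

Comparing lexicons then amounts to running the same matching procedure with each lexicon over a labeled test set and scoring the predicted categories against the gold labels.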