2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI) 2018
DOI: 10.1109/cisp-bmei.2018.8633223
Music Emotions Recognition Based on Feature Analysis

Cited by 4 publications (4 citation statements). References 0 publications.
“…This method still showed the ability to obtain timing information. In the same year, Lv et al (2018) classified the audio based on the traditional SVM method, which verified that the traditional method still remained effective in audio processing. Hu et al (2018) proposed a model method for the emotion classification of Chinese pop music.…”
Section: Related Work (mentioning)
confidence: 82%
“…Several works also used other parameters instead of biosignals for emotion recognition. Among them, we can highlight the groups that assess emotion through audio analysis (Greer et al, 2020; Lv et al, 2018; Mo & Niu, 2019; Lopes et al, 2019; Kumar et al, 2016; Panda et al, 2020; Chapaneri & Jayaswal, 2018). Textual aspects also have great potential in the field of emotion recognition in music.…”
Section: Emotion Recognition Associated With Acoustic Stimuli (mentioning)
confidence: 99%
“…Malheiro et al (2018), in turn, involved both types of problem. Another aspect observed was that the SVM was the algorithm that was most present in the returned studies (Wardana et al, 2018; Hsu et al, 2020; Lv et al, 2018; Dutta et al, 2020; Mo & Niu, 2019; Bakhtiyari et al, 2019; Lopes et al, 2019; Bo et al, 2017; Nemati & Naghsh-Nilchi, 2017; Rahman et al, 2020; Nawa et al, 2018; Dantcheva et al, 2017), followed by Artificial Neural Networks (Zhang et al, 2019; Lv et al, 2018; Dutta et al, 2020; Goyal et al, 2016; Marimpis et al, 2020; Bakhtiyari et al, 2019; Rahman et al, 2019; Rahman et al, 2020; Chapaneri & Jayaswal, 2018).…”
Section: Emotion Recognition Associated With Acoustic Stimuli (mentioning)
confidence: 99%
“…(Nasrullah & Zhao, 2018) took temporal structure as a feature for artist classification using a convolutional recurrent neural network (CRNN) and experimented on the artist20 music dataset. (Lv et al, 2018) used a support vector machine and convolutional neural network for music emotion classification such as sad, exciting, serene, and happy using music feature set as input. (Ghosal & Kolekar, 2018) applied an ensemble technique of convolutional neural network and long short-term memory (CNN LSTM) and transfer learning model for music genre recognition.…”
Section: Background Work (mentioning)
confidence: 99%
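The statement above describes Lv et al (2018) as using a support vector machine over a music feature set to classify emotions such as sad, exciting, serene, and happy. As an illustration only — the original paper's feature set, kernel, and training procedure are not given here — a minimal linear SVM trained by Pegasos-style sub-gradient descent on two hypothetical audio features (normalized tempo, spectral energy) for a binary happy-vs-sad task might be sketched as:

```python
def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style sub-gradient descent on the hinge loss.
    X: list of feature tuples; y: labels in {-1, +1}.
    Returns weight vector w and bias b."""
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:
                # hinge-loss violation: regularize and step toward the example
                w = [(1 - eta * lam) * wj + eta * yi * xj
                     for wj, xj in zip(w, xi)]
                b += eta * yi
            else:
                # no violation: regularization shrink only
                w = [(1 - eta * lam) * wj for wj in w]
    return w, b

def predict(w, b, x):
    """Sign of the decision function: +1 (happy) or -1 (sad)."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical 2-D features: (normalized tempo, spectral energy).
# These toy values are illustrative, not from the cited paper.
happy = [(0.8, 0.9), (0.7, 0.8), (0.9, 0.7)]  # label +1
sad = [(0.2, 0.1), (0.3, 0.2), (0.1, 0.3)]    # label -1
X = happy + sad
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

A real system would extract many more features (e.g. timbral and rhythmic descriptors) and use a multi-class scheme such as one-vs-rest to cover the four emotion labels mentioned in the statement.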