2022
DOI: 10.1007/s11042-022-14252-6
Music genre classification based on fusing audio and lyric information

Cited by 8 publications (5 citation statements)
References 44 publications
“…The proposed model achieved a high classification accuracy of 96.50% on the GTZAN dataset. A res-gated convolutional structure with an attention mechanism was also applied to the MGC task, and the model achieved a classification accuracy of 96.8% on the GTZAN dataset [87]. Parallel attention was applied to the CNN model to extract multiple features from the MLS features [89].…”
Section: Related Work
Confidence: 99%
“…Because of the remarkable success of deep learning in various fields, various deep learning models such as a graph neural network [86], a convolutional neural network [60], and an attention model [87] can be applied to the MIR domain. Recently, convolutional neural networks (CNNs) have attracted huge research interests in the MIR domain due to their inherent ability to handle complex spectral features [38], [39], [41].…”
Section: Introduction
Confidence: 99%
“…They achieved an overall accuracy of 90.2% on a dataset of 10 genres, demonstrating the proposed method's effectiveness. Li et al (2022) proposed a method for music classification using a transformer-based neural network architecture. They achieved an overall accuracy of 92.6% on a dataset of 10 genres, showing the effectiveness of transformer-based models for music classification.…”
Section: Background
Confidence: 99%
“…Audio components, which comprise several abstraction levels, can imply music-evoked emotions [1], predict viral songs [14], and measure music similarity [15]. Frequently, textual and audio features have been used together for classifying moods [16,17] and musical genres [18,19]. However, there is limited knowledge about how the combination of audio and lyrics may influence the association between psychological traits and music choices.…”
Section: Introduction
Confidence: 99%