2023
DOI: 10.1016/j.inffus.2022.10.002
EmoMV: Affective music-video correspondence learning datasets for classification and retrieval

Cited by 9 publications (3 citation statements)
References 34 publications
“…Alternatively, in an effort to collect larger quantities of affect labels in a shorter amount of time, although with a potential loss in accuracy, crowd-sourcing on platforms such as Amazon Mechanical Turk (MTurk) has also been explored [ 3 , 53 , 54 , 55 , 56 ]. Some researchers utilize a mix of both online and offline collection methods [ 57 , 58 ], or even use predictive models such as AttendAffectNet [ 59 ] for the emotion labeling [ 60 ]. Regardless of the data collection method, it is important for each musical excerpt in the dataset to be labelled by multiple participants in order to account for subjectivity.…”
Section: Data Gathering Procedures
confidence: 99%
“…The models are simple in design and intended to be supplementary performance benchmarks on the dataset. In future work, more state-of-the-art methods such as convolutional neural networks [ 70 , 71 , 72 , 73 , 74 , 75 , 76 ], or transformer architectures [ 60 , 77 , 78 ] could be used with the dataset for further experimentation with profile information and its uses for building improved MER models.…”
Section: Emotion Prediction Models
confidence: 99%
“…Developments in multimedia technology have resulted in a sharp increase in the variety of digital music and its listening volume, necessitating urgent advancements in music information retrieval (MIR), which involves utilizing computer technology to automatically analyze, recognize, retrieve, and understand music. Audio music genre classification is an MIR task that involves assigning labels to each piece of music based on characteristics such as genre [1,2], mood [3,4], and artist type [5,6]. Audio music genre classification enables the automatic categorization of audio music based on different styles or types, facilitating a deeper understanding and organization of music libraries.…”
Section: Introduction
confidence: 99%