2018
DOI: 10.1016/j.knosys.2018.07.001
Background music recommendation based on latent factors and moods

Cited by 33 publications (15 citation statements)
References 27 publications
“…Emotion tag-based work uses the relatedness between visual tags and music mood tags [5,6,9]. Moreover, an emotion model [4] has also been used to match video-music pairs [1,3]. These emotion tags for the videos, however, were manually annotated via crowdsourcing, which incurs high labor and time costs.…”
Section: Audiovisual Cross-modal Matching
confidence: 99%
“…Uploaders can record or make videos and select appropriate background music by themselves. However, the huge music pool makes manual selection time-consuming and labor-intensive, so it is necessary to recommend suitable background music for micro-videos [1,2].…”
Section: Introduction
confidence: 99%
“…The recommendations are generated by finding the nearest neighbours of the projected video features in the database of projected music features. Liu and Chen [79] proposed extracting low-level features from both the videos and the music, paying special attention to the emotional undertones of the video content and the music. Relevance scores are then calculated for recommendation, based on a custom-designed emotion-aware scoring function.…”
Section: Music Recommendation and Generation
confidence: 99%
“…The contextual recommendation of songs is based on those emotional states. A mood-based recommendation of music for videos is proposed in [69]. Both videos and music are projected onto a latent emotional space, and a latent factor model is used to find the relevance between videos and music.…”
Section: Temporal Context
confidence: 99%
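The citation statements above describe a common pattern: video and music features are projected into a shared latent (emotional) space, and relevance scores between the projections drive the recommendation. The following is a minimal sketch of that idea only, not the paper's actual model; the dimensions, random projection matrices (which a real system would learn from video-music pairs), and the cosine-similarity score are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes: raw video features, raw music features,
# and the shared latent (emotional) space they are projected into.
D_VIDEO, D_MUSIC, D_LATENT = 128, 64, 16

# Placeholder projection matrices; in a trained latent factor model
# these would be learned from matched video-music pairs.
W_video = rng.standard_normal((D_VIDEO, D_LATENT))
W_music = rng.standard_normal((D_MUSIC, D_LATENT))

def recommend(video_feat, music_feats, k=3):
    """Rank music tracks by latent-space relevance to one video.

    video_feat:  (D_VIDEO,) raw feature vector for the query video.
    music_feats: (n_tracks, D_MUSIC) raw feature matrix for the music pool.
    Returns the indices of the top-k tracks and the full score vector.
    """
    v = video_feat @ W_video                     # (D_LATENT,)
    m = music_feats @ W_music                    # (n_tracks, D_LATENT)
    # Cosine similarity as a simple stand-in for the relevance score.
    scores = (m @ v) / (np.linalg.norm(m, axis=1) * np.linalg.norm(v) + 1e-9)
    top_k = np.argsort(scores)[::-1][:k]
    return top_k, scores

video = rng.standard_normal(D_VIDEO)
tracks = rng.standard_normal((10, D_MUSIC))
top_k, scores = recommend(video, tracks)
```

With learned projections, the same nearest-neighbour lookup over the projected music database would produce the recommendations described in the cited works.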