Proceedings of the 20th ACM International Conference on Multimedia 2012
DOI: 10.1145/2393347.2396493

Automatic music soundtrack generation for outdoor videos from contextual sensor information

Cited by 34 publications (12 citation statements). References 2 publications.
“…We built an offline database of music consisting of songs with the mood tags listed in Figure 2, using the method described by Yu et al [14]. In Last.fm, songs are labeled with social tags based on the feedback of different users.…”
Section: Offline Music Dataset (mentioning)
confidence: 99%
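The quoted approach builds a mood-keyed music index from Last.fm-style social tags. The sketch below is a minimal illustration of that idea only; the mood vocabulary, the data shapes, and the function names are assumptions for illustration, not the implementation of Yu et al. [14].

```python
from collections import defaultdict

# Hypothetical mood vocabulary; the actual tag list comes from Figure 2 of the citing paper.
MOOD_TAGS = {"happy", "sad", "calm", "energetic", "angry"}

def build_mood_index(song_social_tags):
    """Group songs into a hash table keyed by mood tag.

    song_social_tags: dict mapping song id -> list of (tag, count) pairs,
    e.g. collected from Last.fm-style social tagging data (assumed shape).
    """
    index = defaultdict(list)
    for song_id, tags in song_social_tags.items():
        for tag, count in tags:
            tag = tag.lower()
            if tag in MOOD_TAGS:
                # Keep the tag count so songs can later be ranked within each mood.
                index[tag].append((song_id, count))
    # Sort each mood bucket by how often users applied the tag.
    for tag in index:
        index[tag].sort(key=lambda item: item[1], reverse=True)
    return index

# Toy usage:
songs = {
    "song_a": [("happy", 120), ("summer", 30)],
    "song_b": [("sad", 85), ("calm", 40)],
}
mood_index = build_mood_index(songs)  # {'happy': [('song_a', 120)], 'sad': [...], 'calm': [...]}
```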
“…Given a sequence of ranked mood tags G recognized from the trained SVM hmm model, the most frequent mood tags are selected and used as keys to find relevant songs from the hash table. We use a modified mean reciprocal rank (MMRR) method described in Yu et al [14] to retrieve songs sorted in decreasing order of their MMRR metric and the top-N songs are returned.…”
Section: Music Track Recommendation (mentioning)
confidence: 99%
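The quoted retrieval step keys a hash table by the recognized mood tags and returns songs ranked by a modified mean reciprocal rank (MMRR). The exact MMRR metric is defined in Yu et al. [14] and is not reproduced here; the sketch below uses a plain reciprocal-rank aggregate as a hedged stand-in, with hypothetical function and variable names.

```python
from collections import defaultdict

def retrieve_top_n(ranked_mood_tags, mood_index, n=5):
    """Return the top-N songs for a sequence of ranked mood tags.

    ranked_mood_tags: mood tags ordered by recognition rank (best first).
    mood_index: offline hash table mapping mood tag -> list of song ids,
    each list already sorted best first.  The score below is a simple
    reciprocal-rank aggregate, a stand-in for the modified MRR (MMRR)
    of Yu et al. [14], whose exact form is not reproduced here.
    """
    scores = defaultdict(float)
    for tag_rank, tag in enumerate(ranked_mood_tags, start=1):
        for song_rank, song_id in enumerate(mood_index.get(tag, []), start=1):
            # Reward songs that rank high under highly ranked mood tags.
            scores[song_id] += 1.0 / (tag_rank * song_rank)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [song_id for song_id, _ in ranked[:n]]

# Toy usage: a clip recognized as mostly "happy", then "energetic".
top = retrieve_top_n(["happy", "energetic"],
                     {"happy": ["s1", "s2"], "energetic": ["s2", "s3"]},
                     n=2)  # -> ['s1', 's2']
```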
“…There exist a few approaches [6,27,29] to recognize emotions from videos but the field of video soundtrack recommendation for UGVs [24,34] is largely unexplored. Hanjalic et al [6] proposed a computational framework for affective video content representation and modeling based on the dimensional approach to affect.…”
Section: Related Work (mentioning)
confidence: 99%
“…However, the connection between them is not well explored so far. Effective matching techniques between music and images have various applications in cross-modal retrieval, music exploration [1], [2], and automatic music video generation [3], [4]. For example, music alone may be tedious, but presenting it with images or video clips brings more acoustic-visual enjoyment.…”
Section: Introduction (mentioning)
confidence: 99%
“…But customizing the cover for every single piece of music still remains a problem, since an album always contains more than one song. Music generation for photo slideshows and videos has been studied in [3], [4], where emotion and contextual sensor information are utilized to help connect music and video. Given the variety of user needs, in this work we concentrate on the matching of music and images, one of the multimedia cross-modal matching tasks.…”
Section: Introduction (mentioning)
confidence: 99%