2017
DOI: 10.1007/978-3-319-53676-7_3

How to Combine Visual Features with Tags to Improve Movie Recommendation Accuracy?

Abstract: LNBIP reports state-of-the-art results in areas related to business information systems and industrial application software development: timely, at a high level, and in both printed and electronic form. The types of material published include:
• Proceedings (published in time for the respective event)
• Postproceedings (consisting of thoroughly revised and/or extended final papers)
• Other edited monographs (such as, for example, project reports or invited volumes)
• Tutorials (coherently integrated collections o…

Cited by 7 publications (10 citation statements)
References 31 publications
“…In Deldjoo et al. (2016b, c, 2018d) we specifically addressed the under-researched problem of combining visual features extracted from movies with the semantic information embedded in metadata, or with collaborative data available in users' interaction patterns, in order to improve offline recommendation quality. To this end, for multimodal fusion (i.e., fusing features from different modalities), in Deldjoo et al. (2016b) we investigated, for the first time, the adoption of an effective data fusion technique named canonical correlation analysis (CCA) to fuse visual and textual features extracted from movie trailers. A detailed discussion about CCA can be found in Sect.…”
Section: Contributions of This Work
confidence: 99%
“…Although a small number of visual features were used to represent the trailer content (similar to Deldjoo et al. 2016d), the results of offline recommendation using 14K trailers suggested the merits of the proposed fusion approach for the recommendation task. In Deldjoo et al. (2018d) we extended Deldjoo et al. (2016b) and used both low-level visual features (color- and texture-based) following the MPEG-7 standard together with deep learning features in a hybrid CF and CBF approach. The aggregated and fused features were ultimately used as input for a collective sparse linear method (SLIM) (Ning and Karypis 2011), yielding an enhancement over the CF method.…”
Section: Contributions of This Work
confidence: 99%
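The SLIM model referenced above learns a sparse item-item weight matrix W such that the user-item interaction matrix R, multiplied by W, reproduces R; scoring is then a single matrix product. A minimal sketch of this idea, fitting one ElasticNet regression per item as in Ning and Karypis's formulation (the data, hyperparameters, and variable names here are illustrative assumptions, not the cited paper's implementation):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
R = (rng.random((50, 8)) > 0.7).astype(float)  # toy binary user-item matrix

n_items = R.shape[1]
W = np.zeros((n_items, n_items))
for j in range(n_items):
    X = R.copy()
    X[:, j] = 0.0  # mask the target item so it cannot predict itself
    model = ElasticNet(alpha=0.01, l1_ratio=0.5,
                       positive=True, fit_intercept=False)
    model.fit(X, R[:, j])
    W[:, j] = model.coef_
    W[j, j] = 0.0  # enforce the zero-diagonal constraint

scores = R @ W  # predicted preference scores for ranking
```

In the hybrid variant described in the citation, the regression inputs would additionally incorporate the fused content features rather than interactions alone.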
“…Motivated by the approach proposed in [14], we employed a fusion method based on Canonical Correlation Analysis (CCA), which exploits the low-level correlation between two sets of visual features and learns a linear transformation that maximizes the pairwise correlation between the MPEG-7 and deep-network visual feature sets.…”
Section: Feature Fusion
confidence: 99%
“…As for the last approach, the hybridization can be done at the feature level (e.g., as in [12]) or at the ensemble level. We chose the latter for its ease of implementation.…”
Section: Procedure
confidence: 99%
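Ensemble-level hybridization, as chosen in the citation above, typically combines the score lists of two already-trained recommenders rather than their input features. A minimal sketch with a fixed linear weighting after min-max normalization (the weight and function name are assumptions, not the cited procedure):

```python
def hybrid_scores(cf_scores, cbf_scores, w_cf=0.7):
    """Combine two recommenders' item scores at the ensemble level."""
    def minmax(s):
        lo, hi = min(s), max(s)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in s]
    cf_n, cbf_n = minmax(cf_scores), minmax(cbf_scores)
    # Weighted sum of normalized scores; items are ranked by the result
    return [w_cf * a + (1 - w_cf) * b for a, b in zip(cf_n, cbf_n)]

combined = hybrid_scores([3.0, 1.0, 2.0], [0.2, 0.9, 0.4])
```

Normalizing before mixing matters because CF and CBF scores usually live on different scales; without it, one model silently dominates the ensemble.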