2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
DOI: 10.1109/smc.2017.8122750

Content-based top-N recommendations with perceived similarity

Cited by 7 publications (5 citation statements) · References 19 publications
“…Their experiments showed that users indeed preferred the recommendations that considered the human similarity judgments. The results by Colucci et al (2016) and Wang et al (2017) indicate that humans largely agree in their similarity perceptions of item pairs, while these perceptions are not aligned with algorithmic similarity functions. Further, their works demonstrate that different similarity functions result in discrepancies between objective similarity measures and human perception.…”
Section: Assessing the Similarity Perception of Item Pairs
confidence: 87%
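To make the reported discrepancy concrete, here is a small illustrative sketch (not taken from the cited papers; the feature vectors are invented): two standard similarity functions, cosine and Jaccard, applied to the same binary item features can even reverse the ranking of item pairs.

```python
# Illustrative sketch (not from the cited papers): two standard similarity
# functions applied to the same hypothetical binary item features can
# reverse the ranking of item pairs.
import numpy as np

def cosine(a, b):
    # Cosine similarity between binary feature vectors.
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def jaccard(a, b):
    # Shared features divided by total distinct features.
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def vec(idx, dim=12):
    # Binary feature vector with ones at the given indices.
    v = np.zeros(dim)
    v[list(idx)] = 1.0
    return v

a = vec({0, 1})                # 2 features
b = vec(range(8))              # 8 features, superset of a's features
c = vec(range(7))              # 7 features
d = vec({0, 1, 2, 9, 10, 11})  # 6 features, 3 shared with c

print(f"(a,b): cosine={cosine(a, b):.2f}  jaccard={jaccard(a, b):.2f}")
print(f"(c,d): cosine={cosine(c, d):.2f}  jaccard={jaccard(c, d):.2f}")
# Cosine ranks (a,b) above (c,d); Jaccard ranks (c,d) above (a,b).
```

Neither ranking is "wrong"; the flip simply shows that the choice of algorithmic similarity function changes which item pairs look most alike, which is exactly where alignment with human perception can break down.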
“…Yet, these human judgments were only partly aligned with the output of three algorithmic similarity functions (with the highest precision value being only .55). Building on this dataset, Wang et al (2017) designed two content-based recommendation approaches, where one considered human perceptions in the recommendation process whereas the other did not. Their experiments showed that users indeed preferred the recommendations that considered the human similarity judgments.…”
Section: Assessing the Similarity Perception of Item Pairs
confidence: 99%
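A minimal sketch of the general idea behind such perception-aware approaches follows; the data, features, and model here are assumptions for illustration, not Wang et al.'s exact method. A supervised model is fit on human similar/not-similar judgments of item pairs, and its learned score, rather than a fixed similarity function, ranks top-N candidates.

```python
# Sketch under stated assumptions (not Wang et al.'s exact method):
# fit a supervised model on human "similar / not similar" pair judgments
# and use the learned score as the similarity for top-N ranking.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: each item pair is described by per-feature agreement
# (e.g., shared genre, shared director, plot-keyword overlap).
pair_features = rng.random((200, 3))
# Synthetic stand-in for human judgments, for a runnable example.
human_labels = (pair_features @ np.array([2.0, 0.2, 1.0]) > 1.6).astype(int)

model = LogisticRegression().fit(pair_features, human_labels)

def perceived_similarity(pair_feature_vector):
    # Probability that humans would judge the pair similar.
    return model.predict_proba(pair_feature_vector.reshape(1, -1))[0, 1]

# Rank candidate pairings of a seed item by the learned score (top-N).
candidates = rng.random((10, 3))
top_n = np.argsort([-perceived_similarity(c) for c in candidates])[:5]
print("top-5 candidate indices:", top_n)
```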
“…A problem with the validation is that since every y_cnd is a new checklist, the ground truths l needed to evaluate y_cnd can be missing. A common solution to this problem is to collect the ground truth empirically [23], but this is not an option for us. To get a meaningful validation result, the performance statistics for the evaluation need to be estimated.…”
Section: Experiments 3: Evaluation of Constructed Checklists
confidence: 99%
“…CB similarity could alternatively be done with tags or keywords, contributed by users or domain experts; MovieLens released a set of user-contributed tags for movies via the Tag Genome Project [VSR12]. CB similarity can also perform YML recommendations [CZG16; NK11], and improving CB similarity using supervised learning can improve YML recommendations [WAL17]. Both approaches have their own downsides: CF requires user data, making it unsuitable for new items, while CB could produce only obvious recommendations.…”
Section: Related Work
confidence: 99%
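As a concrete illustration of tag-based CB similarity, here is a short sketch: the movie titles and tags below are invented (the actual Tag Genome assigns per-tag relevance scores rather than raw tag strings), and items are compared by cosine similarity over TF-IDF tag vectors.

```python
# Sketch of tag-based content-based similarity; titles and tags are
# made up for illustration, not taken from the Tag Genome data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

movie_tags = {
    "Movie A": "space time-travel cerebral visually-striking",
    "Movie B": "space aliens action visually-striking",
    "Movie C": "romance period-drama slow",
}

titles = list(movie_tags)
tfidf = TfidfVectorizer().fit_transform(movie_tags.values())
sim = cosine_similarity(tfidf)

# Pairwise content-based similarity; higher means more shared tags.
for i, t in enumerate(titles):
    for j in range(i + 1, len(titles)):
        print(f"{t} vs {titles[j]}: {sim[i, j]:.2f}")
```

Because the similarity depends only on item descriptions, this works for brand-new items with no interaction history, which is the CF weakness the excerpt notes; the trade-off is that heavily overlapping tag sets tend to surface only obvious neighbors.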