Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval 2023
DOI: 10.1145/3539618.3591931
Take a Fresh Look at Recommender Systems from an Evaluation Standpoint

Cited by 15 publications (2 citation statements)
References 39 publications
“…Evaluating recommendation systems is a crucial stage in validating their usage. Consequently, several studies have investigated the impact of time on online evaluations [31] [32] [33] because this temporal factor aids in interpreting the acceptance of the obtained recommendation results [34]. Online evaluation is the best method to evaluate RSs and CARS.…”
Section: Literature Review
Confidence: 99%
“…In this light, user experience can be potentially enhanced by advancing the traditional recommender paradigms via generative models. This workshop provides a platform to facilitate the integration of generative models into recommender systems, with a focus on user modeling, content generation, interaction patterns, trustworthiness evaluations [23], and evaluation methods [13].…”
Section: Call For Papers 41 Introduction
Confidence: 99%