Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work 2002
DOI: 10.1145/587078.587096

On the recommending of citations for research papers

Abstract: Collaborative filtering has proven to be valuable for recommending items in many different domains. In this paper, we explore the use of collaborative filtering to recommend research papers, using the citation web between papers to create the ratings matrix. Specifically, we tested the ability of collaborative filtering to recommend citations that would be suitable additional references for a target research paper. We investigated six algorithms for selecting citations, evaluating them through offline experime…
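The abstract describes building the ratings matrix from the citation web: each citing paper acts as a "user" that implicitly rates the papers it references. A minimal sketch of that idea (illustrative only; the paper identifiers, the cosine similarity measure, and the scoring are assumptions, not the authors' six actual algorithms):

```python
# Sketch: treat each citing paper as a user whose binary "ratings" are the
# papers it references, then recommend additional citations for a target
# paper by item-item collaborative filtering (cosine similarity).
from collections import defaultdict
import math

# Hypothetical citation web: citing paper -> set of papers it references.
citation_web = {
    "P1": {"A", "B", "C"},
    "P2": {"A", "B", "D"},
    "P3": {"B", "C", "D"},
    "P4": {"A", "C"},
}

def cosine(raters_a, raters_b):
    """Cosine similarity between two cited papers, each represented as the
    set of citing papers that reference it (a binary rating vector)."""
    overlap = len(raters_a & raters_b)
    if overlap == 0:
        return 0.0
    return overlap / math.sqrt(len(raters_a) * len(raters_b))

def recommend_citations(target_refs, citation_web, k=2):
    # Invert the web: cited paper -> set of citing papers (its "raters").
    cited_by = defaultdict(set)
    for citing, refs in citation_web.items():
        for cited in refs:
            cited_by[cited].add(citing)
    # Score each unseen candidate by its similarity to the references
    # the target paper already cites.
    scores = defaultdict(float)
    for known in target_refs:
        for candidate, raters in cited_by.items():
            if candidate not in target_refs:
                scores[candidate] += cosine(cited_by[known], raters)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A target paper already cites A and B; suggest further references.
print(recommend_citations({"A", "B"}, citation_web))  # → ['C', 'D']
```

Here paper C outscores D because C is co-cited with both A and B more strongly; the evaluation in the paper works along these lines, withholding some of a target paper's references and checking whether the recommender recovers them.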

Cited by 288 publications (114 citation statements)
References 15 publications
“…The researchers proposed and evaluated several content-based filtering (CBF) and collaborative filtering (CF) approaches for research-paper recommendations. In one experiment, CF and CBF performed similarly well (McNee et al 2002). In other experiments, CBF outperformed CF (Dong et al 2009; McNee et al 2002; Torres et al 2004), and in still other experiments CF outperformed CBF (Ekstrand et al 2010; McNee et al 2006; Torres et al 2004).…”
Section: Introduction
confidence: 84%
“…21 % of the approaches introduced in the research papers were not evaluated. Of the evaluated approaches, 69 % were evaluated using offline evaluations, a method with serious shortcomings (Beel et al 2013c; Cremonesi et al 2011, 2012; Hersh et al 2000a, b; Jannach et al 2013; Knijnenburg et al 2012; McNee et al 2002; Said 2013; Turpin and Hersh 2001). Furthermore, many approaches were evaluated against trivial (if any) baselines.…”
Section: Impact of Differences in the Evaluations
confidence: 99%
“…Increasingly, industry and academic researchers agree that the ultimate goal of recommenders is to help users make better decisions, and that high accuracy in itself does not guarantee this aim [8,9,11,12]. In fact, the recommender with the highest accuracy may not even be the most satisfying [7,15]. A plethora of other factors, such as the composition of the recommendation set [1], the preference elicitation method [13], and personal considerations (e.g.…”
Section: Introduction
confidence: 99%