2019
DOI: 10.1145/3308774.3308780
What Should We Teach in Information Retrieval?

Abstract: Modern Information Retrieval (IR) systems, such as search engines, recommender systems, and conversational agents, are best thought of as interactive systems. And their development is best thought of as a two-stage development process: offline development followed by continued online adaptation and development based on interactions with users. In this opinion paper, we take a closer look at existing IR textbooks and teaching materials, and examine to which degree they cover the offline and online stages of the…

Cited by 8 publications (2 citation statements)
References 81 publications (103 reference statements)
“…Information retrieval (IR) evolved continuously for four decades (Robertson and Jones, 1976; Manning et al , 2008; Baeza-Yates and Ribeiro-Neto, 2011; Markov and de Rijke, 2019) from symbolic to vectorized representation, text transformation and analysis, offline and online treatment, etc. Among this field, ad hoc search aims to bring forward documents contained in a corpus related to a given query, which summarizes the expected features of common search engines nowadays.…”
Section: Introduction (mentioning; confidence: 99%)
“…Online ranker evaluation concerns the task of determining the ranker with the best performance out of a finite set of rankers. It is an important challenge for information retrieval systems [21,29,30]. In the absence of an oracle judge who can tell the preferences between all rankers, the best ranker is usually inferred from user feedback on the result lists produced by the rankers [16].…”
Section: Introduction (mentioning; confidence: 99%)
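
The online ranker evaluation described in the last citation statement is commonly carried out with interleaved comparisons of two rankers' result lists. The sketch below illustrates one such method, team-draft interleaving, under the assumption that both rankers order the same document pool; the function names and the simple click-credit rule are illustrative choices for this sketch, not taken from the cited works.

import random

def team_draft_interleave(ranking_a, ranking_b):
    # Team-draft interleaving: build one result list from two rankings
    # that cover the same document pool (an assumption of this sketch).
    team_of = {}        # document -> team ('a' or 'b') that contributed it
    interleaved = []
    picks = {'a': 0, 'b': 0}
    rankings = {'a': ranking_a, 'b': ranking_b}
    total = len(set(ranking_a) | set(ranking_b))
    while len(interleaved) < total:
        # The team with fewer picks goes next; ties are broken by a coin flip.
        if picks['a'] != picks['b']:
            team = 'a' if picks['a'] < picks['b'] else 'b'
        else:
            team = random.choice(['a', 'b'])
        # The chosen team contributes its highest-ranked document not yet shown.
        doc = next(d for d in rankings[team] if d not in team_of)
        interleaved.append(doc)
        team_of[doc] = team
        picks[team] += 1
    return interleaved, team_of

def credit_clicks(team_of, clicked_docs):
    # Attribute each click to the team whose ranker contributed the document;
    # the ranker with more credited clicks wins this impression.
    score = {'a': 0, 'b': 0}
    for doc in clicked_docs:
        if doc in team_of:
            score[team_of[doc]] += 1
    return score

# Example: ranker 'a' prefers d1, ranker 'b' prefers d3; a click on d3
# credits whichever team contributed d3 to the interleaved list.
interleaved, team_of = team_draft_interleave(['d1', 'd2', 'd3'], ['d3', 'd1', 'd2'])
print(interleaved, credit_clicks(team_of, ['d3']))

Aggregated over many user impressions, the per-impression winners give a preference between the two rankers, which is the kind of feedback-driven inference the quoted statement refers to.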