1981
DOI: 10.1145/1013228.511764
Simulation of user judgments in bibliographic retrieval systems

Abstract: The general model and simulation algorithms for bibliographic retrieval systems presented in an earlier paper are expanded. The new model integrates the physical as well as the logical and semantic elements of these systems. A modified algorithm is developed for the simulation of user relevance judgments, and is validated, by means of recall-precision curves and a Kolmogorov-Smirnov test of recall, for two test collections. Other approaches to goodness-of-fit testing are suggested.
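As an illustration of the validation step the abstract describes, a two-sample Kolmogorov-Smirnov statistic can compare recall values obtained from simulated relevance judgments against those from real judgments. The sketch below is illustrative only — the function name and recall data are assumptions, not taken from the paper:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    def ecdf(sample, x):
        return sum(1 for v in sample if v <= x) / len(sample)
    points = sorted(set(sample_a) | set(sample_b))
    return max(abs(ecdf(sample_a, x) - ecdf(sample_b, x)) for x in points)

# Hypothetical recall values at fixed rank cutoffs: real vs. simulated judgments.
real_recall = [0.10, 0.25, 0.40, 0.55, 0.70, 0.80]
sim_recall = [0.12, 0.22, 0.38, 0.60, 0.68, 0.82]
d = ks_statistic(real_recall, sim_recall)
# A small D (relative to the approximate critical value
# 1.36 * sqrt((n + m) / (n * m)) at alpha = 0.05) indicates the simulated
# recall distribution is a plausible fit to the real one.
```

A goodness-of-fit test of this form accepts the simulation when D stays below the critical value, which is one way the recall-based validation in the abstract can be operationalized.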

Cited by 19 publications (17 citation statements). References 2 publications.
“…Simulation has been used in many different ways in IR. For example, approaches include simulated work tasks and tracks [14,61], simulated and synthetic data collections [3,5,34,58], and simulated interaction [6,16,19,41,62]. However, simulation has been used largely for evaluation and exploration to (i) determine how well an IR system performs, or to (ii) determine how the performance changes under different conditions and behaviours [4,8,9].…”
Section: Simulation in IR and IIR (mentioning)
confidence: 99%
“…In this way large numbers of queries and judgments may be obtained without need for human action, as a substitute for hand-crafted individual retrieval queries and explicitly judged sets of relevant documents. Simulated queries have been compared to manually created queries for information retrieval [8,162]. Reproducing absolute evaluation scores through simulation has been found to be challenging, as absolute scores will change with, e.g., the recall base.…”
Section: Simulating Logged User Queries (mentioning)
confidence: 99%
“…Due to the complexities inherent in the interactions of different IR system components, which could not be adequately modeled, the author concluded that the simulation model required additional components to be an effective evaluative tool. Tague, Nelson, and Wu (1981) examined key difficulties with the overall simulation of retrieval systems. Their simulation model could generate both a document description based on the term distribution, indexing exhaustivity, and cooccurrence of terms, and also generated a query set.…”
Section: Related Research (mentioning)
confidence: 99%
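The generator described in the last excerpt — producing a document description from a term distribution with a fixed indexing exhaustivity — can be sketched as below. The function name, vocabulary, and sampling scheme are illustrative assumptions, not the authors' actual 1981 algorithm, which additionally modelled term co-occurrence:

```python
import random

def generate_document(term_probs, exhaustivity, rng):
    """Draw a simulated document description: `exhaustivity` distinct index
    terms, sampled with probability proportional to collection frequency.
    (Illustrative sketch; the original model also conditioned on
    co-occurrence of terms, which is omitted here.)"""
    terms = list(term_probs)
    weights = [term_probs[t] for t in terms]
    chosen = set()
    while len(chosen) < exhaustivity:
        chosen.add(rng.choices(terms, weights=weights, k=1)[0])
    return chosen

# Hypothetical term frequencies for a toy vocabulary.
vocab = {"retrieval": 0.30, "index": 0.25, "query": 0.20,
         "relevance": 0.15, "recall": 0.10}
doc = generate_document(vocab, exhaustivity=3, rng=random.Random(42))
```

Repeating the draw per document yields a synthetic collection whose term statistics follow the chosen distribution, which is the substitute for hand-crafted documents and judgments that the cited work discusses.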