1986
DOI: 10.1177/016555158601200112
Weighting, ranking and relevance feedback in a front-end system

Abstract: A prototype front-end system, Cirt, which permits weighting, ranking and relevance feedback on a traditional IR system, Data-Star, is described and discussed. Cirt is based on an integrated theory of search-term weighting, document ranking and modification of weights based on relevance feedback. Previous laboratory tests of various aspects of the theory have shown the need for further evaluation in an operational environment; the intention of Cirt is to make such evaluation possible.
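The abstract does not give Cirt's formulas, but the weighting-and-feedback theory it refers to is in the tradition of the Robertson/Sparck Jones relevance weight, where term weights are estimated from relevance-feedback counts and documents are ranked by the sum of matching term weights. A minimal sketch of that idea (an illustration, not the Cirt implementation; all function names here are hypothetical):

```python
import math

def rsj_weight(N, n, R, r):
    """Robertson/Sparck Jones relevance weight for one search term.

    N: total documents in the collection
    n: documents containing the term
    R: documents the user has judged relevant
    r: judged-relevant documents containing the term
    The 0.5 terms are the conventional continuity correction.
    """
    return math.log(((r + 0.5) / (R - r + 0.5)) /
                    ((n - r + 0.5) / (N - n - R + r + 0.5)))

def score(doc_terms, weights):
    """Rank a document by the sum of weights of the query terms it contains."""
    return sum(weights.get(t, 0.0) for t in doc_terms)
```

As feedback accumulates, re-estimating `rsj_weight` with updated (R, r) counts modifies the weights, which in turn reorders the ranking; this is the weighting-ranking-feedback loop the abstract describes.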

Cited by 23 publications (8 citation statements)
References 6 publications
“…The other method to perform query expansion, perhaps the most effective of all, is that of relevance feedback [11]. In this method the user submits a query, which yields an initial set of results.…”
Section: Query Expansion
confidence: 99%
“…One example described in Robertson, Thompson and Macaskill (1986) required a very large sample of independent searches (e.g., 500 topics). A relatively small number of search topics was usually used in a matched-pair design study (between two and eight), while a non-matched-pair design required a very large number of search topics.…”
Section: Search Topic Variability
confidence: 99%
“…The information provided by the query is examined in Losee (1988). The second way that parameters' values may be estimated is through relevance feedback, information provided by the searcher about what the user finds to be of interest (Bookstein, 1983; Chow & Yu, 1982; Losee, 1988; Moon, 1993; Robertson, Thompson, Macaskill, & Bovey, 1986; Smeaton, 1984; Spink, 1995). When estimating probabilities for use in the probabilistic model, it is necessary either to assume statistical independence of terms or to formally incorporate some form of statistical dependence between document features (Chow & Liu, 1968; Cooper, 1995; Croft, 1986; Lam & Yu, 1982; Losee, 1994a, 1995a; Van Rijsbergen, 1977; Yu, Buckley, Lam, & Salton, 1983).…”
Section: Models of Retrieval and Term
confidence: 99%