1995
DOI: 10.1016/0306-4573(94)00051-4
Large test collection experiments on an operational, interactive system: Okapi at TREC

Cited by 136 publications (108 citation statements)
References 7 publications
“…The µ parameter chosen is the one that optimised the performance for each metric in every collection, picked up from a reasonable set of possible choices 3 . The second weighting function considered was the probabilistic Okapi's Best Match25 (BM25) [10] which has proved to be robust, high-performing and stable in many IR studies. The behaviour of the BM25 scores is governed by three parameters, namely k 1 , k 3 , and b.…”
Section: Experiments and Results
confidence: 99%
“…The behaviour of the BM25 scores is governed by three parameters, namely k 1 , k 3 , and b. Some studies ( [5]) have shown that both k 1 and k 3 have little impact on retrieval performance, so for the rest of the paper they are set as constant to the values recommended in [10] (k 1 = 1.2, k 3 = 1000). The b parameter controls the document length normalisation factor and it has been optimised in the same way as λ for JM (parameter exploration in the (0, 1] range with 0.05 steps), independently for each metric and collection.…”
Section: Experiments and Results
confidence: 99%
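The excerpt above describes the BM25 setup used by the citing paper: k1 and k3 held at the recommended constants (k1 = 1.2, k3 = 1000) and b tuned by grid search over (0, 1] in 0.05 steps. A minimal sketch of that setup follows; the scoring function is the classic BM25 formulation and the toy corpus is an illustrative assumption, not data from the cited papers.

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, k3=1000, b=0.75):
    """Score one document against a query with classic BM25.

    k1 and k3 default to the constants quoted in the excerpt; b is the
    document-length-normalisation parameter that the paper tunes.
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N   # average document length
    dl = len(doc_terms)
    score = 0.0
    for term in set(query_terms):
        df = sum(1 for d in corpus if term in d)   # document frequency
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        tf = doc_terms.count(term)            # term frequency in document
        qtf = query_terms.count(term)         # term frequency in query
        doc_part = tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avgdl))
        query_part = qtf * (k3 + 1) / (qtf + k3)
        score += idf * doc_part * query_part
    return score

# Grid search for b over (0, 1] in 0.05 steps, as described in the excerpt.
b_grid = [round(0.05 * i, 2) for i in range(1, 21)]

# Toy corpus (hypothetical; for illustration only).
corpus = [["okapi", "bm25", "ranking"],
          ["vector", "space", "model"],
          ["bm25", "tuning"]]
scores = {b: bm25_score(["bm25"], corpus[0], corpus, b=b) for b in b_grid}
```

In a real tuning run, each b on the grid would be evaluated against relevance judgments for every metric and collection, and the best-performing value kept, as the excerpt describes.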
“…Popular similarity functions such as the Okapi function [13] and the Cosine function can be used to compute the similarity between a retrieved result and a query.…”
Section: Ranking Preferences
confidence: 99%
“…Second, different users may have different search goals even when they submit the same query. Some search algorithms (e.g., PageRank [13]) tend to retrieve results that cover the most popular meanings/usages of query terms. For example, when "apple" was submitted to Google on May 24th, 2012, all search results in the first result page are related to the company Apple.…”
Section: Introduction
confidence: 99%