2011
DOI: 10.1002/asi.21501
Finding a good query‐related topic for boosting pseudo‐relevance feedback

Abstract: Pseudo-relevance feedback (PRF) via query expansion (QE) assumes that the top-ranked documents from the first-pass retrieval are relevant. The most informative terms in the pseudo-relevant feedback documents are then used to update the original query representation in order to boost the retrieval performance. Most current PRF approaches estimate the importance of the candidate expansion terms based on their statistics on document level. However, a document for PRF may consist of different topics, which may not…
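The first-pass feedback loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: raw term frequency in the pseudo-relevant set stands in for a real informativeness estimator, and the function names and parameters are hypothetical.

```python
from collections import Counter

def prf_expand(query_terms, ranked_docs, k=3, m=5):
    """Naive pseudo-relevance feedback via query expansion.

    Assumes the top-k documents from the first-pass retrieval are
    relevant, scores candidate terms by their frequency in that
    pseudo-relevant set, and appends the m best new terms to the query.
    """
    feedback = ranked_docs[:k]  # pseudo-relevant feedback set
    counts = Counter(t for doc in feedback for t in doc)
    # exclude terms already in the query, rank the rest by frequency
    candidates = [(t, c) for t, c in counts.items() if t not in query_terms]
    expansion = [t for t, _ in sorted(candidates, key=lambda x: -x[1])[:m]]
    return query_terms + expansion
```

Document-level statistics like these are exactly what the paper argues against: a frequent term may come from a topic of the feedback document that is unrelated to the query.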

Cited by 36 publications (11 citation statements)
References 37 publications
“…In this paper, we extend our previous studies around the Latent Concept Modeling framework (Deveaud et al, 2013b), which mainly consists in applying topic modeling algorithms such as LDA to a small set of pseudo-relevant feedback documents (Deveaud et al, 2013a; Ye et al, 2011). While we recall the main principles of our method in the next section, we perform a thorough evaluation of the estimated parameters and of the retrieval effectiveness.…”
Section: Related Work
Mentioning, confidence: 92%
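Applying a topic model to a small pseudo-relevant feedback set, as the statement above describes, can be sketched with a tiny collapsed Gibbs sampler for LDA. This is a didactic toy, not the implementation of Deveaud et al. or Ye et al.: hyperparameters, iteration count, and the top-word cutoff are illustrative choices.

```python
import random

def lda_gibbs(docs, n_topics=2, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA on a small feedback set.

    Returns, per topic, the highest-count words: candidate
    expansion terms grouped by topic.
    """
    rng = random.Random(seed)
    vocab = sorted({t for d in docs for t in d})
    V = len(vocab)
    widx = {t: i for i, t in enumerate(vocab)}

    # z[d][i]: topic assignment of token i in doc d, plus count tables
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    ndk = [[0] * n_topics for _ in docs]       # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                        # tokens per topic
    for d, doc in enumerate(docs):
        for i, t in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][widx[t]] += 1; nk[k] += 1

    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, t in enumerate(doc):
                w, k = widx[t], z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # full conditional p(z = j | everything else)
                weights = [(ndk[d][j] + alpha) * (nkw[j][w] + beta)
                           / (nk[j] + V * beta) for j in range(n_topics)]
                r = rng.random() * sum(weights)
                for j, wgt in enumerate(weights):
                    r -= wgt
                    if r <= 0:
                        k = j
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1

    top = {k: sorted(range(V), key=lambda w: -nkw[k][w])[:3]
           for k in range(n_topics)}
    return {k: [vocab[w] for w in ws] for k, ws in top.items()}
```

With only a handful of feedback documents, a short chain like this is cheap, which is what makes topic modeling on the feedback set (rather than the whole collection) practical.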
“…Since the top‐ k retrieved images are assumed and used as the positive feedback set, the number of the feedback set is crucial to the PRF performance (Ye, Huang, & Lin, ). However, many related studies have only used a fixed set of pseudopositive/‐negative images for PRF, such as Deselaers, Paredes et al.…”
Section: Important Factors of PRF
Mentioning, confidence: 99%
“…It assumes that top-ranked documents in the first-pass retrieval are relevant, which are then used as feedback documents in order to refine the representation of original queries by adding potentially related terms. Although PRF has been shown to be effective in improving IR performance [4,6,9,13,23,26,28,30,36,37,40,42] in a number of IR tasks, traditional PRF can also fail in some cases. For example, when some of the feedback documents have several incoherent topics, terms in the irrelevant contents are likely to misguide the feedback models by importing noisy terms into the queries.…”
Section: Introduction
Mentioning, confidence: 98%
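The failure mode in the statement above, noisy terms leaking in from incoherent topics, motivates restricting expansion to a query-related topic. A simplified sketch of that idea, assuming topic-word distributions are already available (e.g., from a topic model over the feedback set), and not the paper's exact estimator:

```python
def query_related_topic(topics, query_terms, m=5):
    """Select the topic most related to the query and draw expansion
    terms only from it, so terms from unrelated topics in mixed-topic
    feedback documents do not enter the expanded query.

    `topics` maps a topic id to a {term: probability} distribution.
    """
    def affinity(dist):
        # how much probability mass the topic puts on the query terms
        return sum(dist.get(t, 0.0) for t in query_terms)
    best = max(topics, key=lambda k: affinity(topics[k]))
    ranked = sorted(topics[best].items(), key=lambda x: -x[1])
    return [t for t, _ in ranked if t not in query_terms][:m]
```

Here topic affinity is just the summed probability of the query terms; any query-topic similarity (e.g., KL divergence against a query language model) could stand in for it.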