Proceedings of HLT-NAACL 2004: Short Papers
DOI: 10.3115/1613984.1614008
Direct maximization of average precision by hill-climbing, with a comparison to a maximum entropy approach

Abstract: We describe an algorithm for choosing term weights to maximize average precision. The algorithm performs successive exhaustive searches through single directions in weight space. It makes use of a novel technique for considering all possible values of average precision that arise in searching for a maximum in a given direction. We apply the algorithm and compare it to a maximum entropy approach.
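The abstract describes coordinate-wise hill-climbing on term weights with average precision as the objective. The paper's own line-search technique enumerates every AP value attainable along a direction; the sketch below is a simplified stand-in that instead sweeps each weight over a fixed candidate grid. All names, the grid, and the toy data are illustrative assumptions, not taken from the paper.

```python
def average_precision(scores, relevant):
    """AP = mean, over the relevant docs, of precision at each one's rank."""
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

def hill_climb_weights(features, relevant, candidates, n_passes=10):
    """Coordinate-wise hill-climbing: sweep each weight in turn, trying every
    candidate value and keeping the one that maximizes AP; stop when a full
    sweep yields no improvement (a simplification of the paper's exact
    per-direction search)."""
    n = len(features[0])
    w = [1.0] * n

    def ap_of(w):
        scores = [sum(wi * fi for wi, fi in zip(w, f)) for f in features]
        return average_precision(scores, relevant)

    best = ap_of(w)
    for _ in range(n_passes):
        improved = False
        for d in range(n):
            keep = w[d]
            for c in candidates:
                w[d] = c
                val = ap_of(w)
                if val > best:
                    best, keep, improved = val, c, True
            w[d] = keep  # commit the best value found for this coordinate
        if not improved:
            break
    return w, best
```

On a toy collection where the relevant documents score high on the second feature, the search flips the first weight negative and reaches AP = 1.0, i.e. all relevant documents ranked first.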

Cited by 21 publications (13 citation statements); references 7 publications.
“…In this respect, good retrieval performance requires that the model do well with many different kinds of "word" queries while annotation requires one to do well at finding as many keywords as possible for images. This issue has also been observed in text retrieval where it has been shown that the likelihood surface is unlikely to correlate with the retrieval metric surface [21,19].…”
Section: Introduction
confidence: 85%
“…However, this has been shown to be invalid both experimentally and theoretically. This is known as metric divergence [22].…”
Section: Parameter Estimation
confidence: 99%
“…Feng and Manmatha [9] showed that this approach worked for image retrieval and we also maximised average precision. We followed a hill-climbing mean average precision optimisation as explained in [25].…”
Section: Model Training
confidence: 99%