2013
DOI: 10.1109/tip.2013.2262289

Enhancing Training Collections for Image Annotation: An Instance-Weighted Mixture Modeling Approach

Abstract: Tagged Web images provide an abundance of labeled training examples for visual concept learning. However, the performance of automatic training data selection is susceptible to highly inaccurate tags and atypical images. Consequently, manually curated training data sets are still a preferred choice for many image annotation systems. This paper introduces ARTEMIS - a scheme to enhance automatic selection of training images using an instance-weighted mixture modeling framework. An optimization algorithm is deriv…

Cited by 5 publications (7 citation statements)
References 51 publications
“…Hence, we firstly smooth the loss and then use Nesterov's optimal gradient method [12] to solve (5). According to [12], the smoothed version of the hinge loss g(h_k, y_k, θ) = max{0, −y_k(1 − θ^T h_k)} can be given by…”
Section: B. Optimization Procedures (mentioning)
confidence: 99%
“…The loss function Φ(θ) is nondifferentiable. Hence, we firstly smooth the loss and then use Nesterov's optimal gradient method [12] to solve (5). According to [12], the smoothed version of the hinge loss…”
Section: B. Optimization Procedures (mentioning)
confidence: 99%
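The cited snippets describe a common pattern: the hinge-type loss is nondifferentiable, so it is first replaced by a smoothed surrogate and then minimized with Nesterov's accelerated (optimal) gradient method. The sketch below is a minimal illustration of that general idea only, assuming the standard Nesterov smoothing of a term max{0, z} and a margin z_k = 1 − y_k·θ^T h_k; the variable names, smoothing parameter mu, and step-size rule are illustrative assumptions and not the citing paper's exact formulation (5).

```python
# Minimal sketch: Nesterov-smoothed hinge loss + accelerated gradient descent.
# The margin definition, names (H, y, mu), and step size are illustrative
# assumptions, not the citing paper's exact equation (5).
import numpy as np

def smoothed_hinge(z, mu):
    """Nesterov smoothing of max(0, z) with smoothness parameter mu > 0.

    Piecewise form obtained from max_{0<=u<=1} (u*z - mu/2 * u^2):
      z <= 0      -> 0
      0 < z < mu  -> z^2 / (2*mu)
      z >= mu     -> z - mu/2
    """
    return np.where(z <= 0, 0.0,
           np.where(z < mu, z**2 / (2.0 * mu), z - mu / 2.0))

def smoothed_hinge_grad(z, mu):
    """Derivative of the smoothed hinge w.r.t. z, i.e. clip(z/mu, 0, 1)."""
    return np.clip(z / mu, 0.0, 1.0)

def nesterov_fit(H, y, mu=0.1, n_iter=200, step=None):
    """Fit a linear scorer theta by minimizing the mean smoothed hinge loss
    over margins z_k = 1 - y_k * theta^T h_k with Nesterov's accelerated
    gradient method (constant step 1/L, L estimated from the data)."""
    n, d = H.shape
    if step is None:
        # Gradient Lipschitz constant is bounded by ||H||_2^2 / (n * mu).
        L = np.linalg.norm(H, 2) ** 2 / (n * mu)
        step = 1.0 / L
    theta = np.zeros(d)
    theta_prev = np.zeros(d)
    for t in range(1, n_iter + 1):
        # Momentum (look-ahead) point.
        v = theta + (t - 1) / (t + 2) * (theta - theta_prev)
        z = 1.0 - y * (H @ v)  # margins at the look-ahead point
        g = -(H * (y * smoothed_hinge_grad(z, mu))[:, None]).mean(axis=0)
        theta_prev, theta = theta, v - step * g
    return theta

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
H = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.where(H @ w_true >= 0, 1.0, -1.0)
theta = nesterov_fit(H, y)
print("training accuracy:", np.mean(np.sign(H @ theta) == y))
```

The smoothed surrogate has a Lipschitz-continuous gradient, which is what licenses the accelerated method's O(1/t^2) convergence; smaller mu tracks the original hinge more closely at the cost of a larger Lipschitz constant and thus a smaller step size.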