2012
DOI: 10.1007/978-3-642-33783-3_7

Random Forest for Image Annotation

Abstract: In this paper, we present a novel method for image annotation and make three contributions. First, we propose using the tags contained in the training images as supervising information to guide the generation of random trees, so that the retrieved nearest-neighbor images are not only visually similar but also semantically related. Second, unlike conventional decision-tree methods, which fuse the information contained at each leaf node individually, our method treats the random forest as a whol…
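The abstract's core idea — growing trees under tag supervision and then using the forest to retrieve semantically related neighbors — can be sketched with off-the-shelf tools. This is a minimal illustration, not the paper's method: it assumes scikit-learn's `RandomForestClassifier` trained on visual features with tags as labels, and uses leaf co-occurrence across trees as a stand-in affinity. All data below is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data: 60 "images" with 16-D visual features and one of 3 tags.
# Tags supervise tree growth, as in the paper's first contribution.
X = rng.normal(size=(60, 16))
tags = rng.integers(0, 3, size=60)
X[tags == 1] += 2.0          # make tags weakly visible in the features
X[tags == 2] -= 2.0

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, tags)

# Leaf co-occurrence as a semantic affinity: two images are "near"
# if many trees route them to the same leaf node.
leaves = forest.apply(X)                      # shape (n_samples, n_trees)
query = rng.normal(size=(1, 16)) + 2.0        # resembles the tag-1 cluster
q_leaves = forest.apply(query)                # shape (1, n_trees)
affinity = (leaves == q_leaves).mean(axis=1)  # fraction of shared leaves
neighbors = np.argsort(-affinity)[:5]         # top-5 neighbor indices
print(tags[neighbors])                        # mostly tag 1, by construction
```

Because the trees were grown to separate tags, the retrieved neighbors tend to share the query's tag as well as its appearance, which is the effect the abstract describes.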


Cited by 54 publications (24 citation statements)
References 32 publications
“…It can be seen that our method consistently improves performance over the conventional one-vs-rest SVM. Also, it performs comparable or better than even the recently proposed annotation methods such as [6,8,23] (except for IAPRTC-12 dataset where its performance is inferior only to the best results of [8]). We also compare with two SVM-based models [1,9].…”
Section: Discussion
confidence: 83%
“…We use the same evaluation criteria as being used by previous methods [6,8,11,19,23]. Given a new sample, first we compute the score for each label using the corresponding classifier, and then assign it the five top-scoring labels.…”
Section: Discussion
confidence: 99%
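The evaluation protocol quoted above — score every label with its classifier, then keep the five top-scoring labels — can be sketched as follows. The label names and scores are purely illustrative stand-ins for classifier outputs.

```python
import numpy as np

# Hypothetical per-label scores for one test image, e.g. from
# one-vs-rest classifiers (names and values are made up).
labels = np.array(["sky", "sea", "tree", "car", "person", "dog", "grass"])
scores = np.array([0.91, 0.85, 0.40, 0.10, 0.77, 0.05, 0.62])

# Annotate the image with its five top-scoring labels.
top5 = labels[np.argsort(-scores)[:5]]
print(top5)   # ['sky' 'sea' 'person' 'grass' 'tree']
```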
“…As noted by [9,3,38], [15]'s training complexity is quadratic, O(n²), where n is the number of training images. Since it relies on sophisticated training procedures and per-tag optimizations, it is not scalable to large datasets.…”
Section: Computational Complexity
confidence: 99%
“…Thus each image is annotated with the n most relevant labels (usually, as in this paper, the results are obtained using n = 5). Then, the results are reported as mean precision P and mean recall R over the ground-truth labels; N+ is often used to denote the number of labels with non-zero recall value. [Spilled table of previously reported results: CRM [14], InfNet [19], NPDE [27], MBRM [4], SML [2], TGLM [17], GS [28], JEC-15 [9], TagProp σRK [9], TagProp σSD [9], RF-opt [5], KSVM-VT [26], 2PKNN [25], TagProp σML [9], 2PKNN ML [25], and this paper's best result.] Note that each image is forced to be annotated with n labels, even if the image has fewer or more labels in the ground truth.…”
Section: Evaluation Measures
confidence: 99%
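The measures named in this excerpt — per-label mean precision P, mean recall R, and N+ (labels with non-zero recall) — can be computed as sketched below. The ground-truth and prediction matrices are toy examples; averaging per label with zero-denominator labels contributing 0 is a common convention in this literature, assumed here.

```python
import numpy as np

# Toy data: rows = 4 images, columns = 3 labels.
gt   = np.array([[1, 0, 1], [1, 1, 0], [0, 0, 1], [1, 0, 0]])  # ground truth
pred = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1], [1, 0, 0]])  # top-n output

tp = (gt & pred).sum(axis=0).astype(float)   # per-label true positives
with np.errstate(invalid="ignore"):
    prec = np.nan_to_num(tp / pred.sum(axis=0))  # per-label precision
    rec  = np.nan_to_num(tp / gt.sum(axis=0))    # per-label recall

P, R = prec.mean(), rec.mean()   # mean over labels
n_plus = (rec > 0).sum()         # N+: labels with non-zero recall
print(P, R, n_plus)
```

On this toy data P = R ≈ 0.833 and N+ = 3, since every label is recalled at least once.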