CVPR 2011
DOI: 10.1109/cvpr.2011.5995476

Mining discriminative co-occurrence patterns for visual recognition

Cited by 63 publications (56 citation statements)
References 28 publications
“…Since the latter implies testing all keypoints in a one-vs-rest fashion, they test only the frequent keypoints in the image dataset. In other work, Yuan et al. [22, 23] used a k-nearest-neighbors algorithm to group visual words and build visual phrases of different lengths in order to capture relevant information. In video data mining, visual phrases have also been used to find the principal objects and characters in a video by clustering viewpoint-invariant configurations [13].…”
Section: Related Work
confidence: 99%
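To make the phrase-building step in the statement above concrete, here is a minimal sketch of grouping each keypoint's visual word with the words of its k nearest spatial neighbors. It is a rough illustration only; the function name, parameters, and use of scikit-learn are assumptions, not the cited authors' implementation.

```python
# Minimal sketch (assumed, illustrative): a "visual phrase" is the unordered
# set of a keypoint's visual word plus the words of its k nearest neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_visual_phrases(locations, words, k=2):
    """locations: (N, 2) keypoint coordinates; words: (N,) visual-word ids.

    Returns one phrase per keypoint: the sorted tuple of the keypoint's own
    word and its k nearest neighbors' words (phrase length k + 1).
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(locations)
    _, idx = nn.kneighbors(locations)  # first neighbor is the point itself
    return [tuple(sorted(words[i] for i in row)) for row in idx]

# Toy usage: five keypoints quantized into a 3-word vocabulary.
locs = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5]], dtype=float)
wids = np.array([0, 1, 0, 2, 1])
print(build_visual_phrases(locs, wids, k=1))
```

Varying k yields phrases of different lengths, which matches the "different lengths" idea in the quoted passage.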
“…[11,12,22,23,25,29,33]). These methods differ in the way they transform images into sets of items that can be mined.…”
Section: Related Work
confidence: 99%
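As a rough illustration of the image-to-itemset transformation these methods share, the sketch below encodes each image as the set of visual-word ids it contains and counts word pairs that co-occur across images. The function name and support threshold are illustrative assumptions, not any surveyed method's actual code.

```python
# Minimal sketch (assumed, illustrative): images as transactions of visual
# words, with frequent co-occurring word pairs counted directly.
from itertools import combinations
from collections import Counter

def frequent_word_pairs(image_word_sets, min_support=2):
    """image_word_sets: iterable of sets of visual-word ids, one per image.

    Returns word pairs that co-occur in at least min_support images.
    """
    counts = Counter()
    for words in image_word_sets:
        for pair in combinations(sorted(words), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Toy usage: four "images", each reduced to the set of words it contains.
images = [{0, 1, 3}, {0, 1}, {1, 3, 4}, {0, 1, 4}]
print(frequent_word_pairs(images, min_support=3))  # {(0, 1): 3}
```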
“…Reducing the size of the vocabulary is not a good option, since vocabulary size is positively correlated with good performance. Instead, [32, 33] mine patterns from high-level semantic features, for which a smaller representational space is needed. However, such techniques can work only if the high-level features are correctly detected, which remains an open question.…”
Section: Related Work
confidence: 99%
“…Method                              Accuracy (%)
GIST-color [19]                       69.50
RBoW [21]                             78.60 ± 0.70
Classemes [29]                        80.60
Object Bank [15]                      80.90
SP [12]                               81.40
SPMSM [11]                            82.30
LCSR [26]                             82.67 ± 0.51
SP-pLSA [1]                           83.70
CENTRIST [33]                         83.88 ± 0.76
HIK [32]                              84.12 ± 0.52
VC + VQ [17]                          85.40
LMLF [2]                              85.60 ± 0.20
LPR [25]                              85.81
Hybrid-Parts + GIST-color + SP [37]   86.30
CENTRIST + LLC + Boosting [36]        87.80
RSP [7]                               88.10
LScSPM [6]                            89.75 ± 0.50
ISPR (our approach)                   …”
Section: 15-Scene Dataset
confidence: 99%