2007 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2007.383222
Discovery of Collocation Patterns: from Visual Words to Visual Phrases

Cited by 212 publications (152 citation statements)
References 20 publications
“…Frequent pattern mining techniques have been used in computer vision problems, including image classification [2,13,14], object recognition and object-part recognition [12]. These methods differ in their image representation, in the way they convert that representation into a transactional description suitable for pattern mining techniques, and in how they select relevant and discriminative patterns.…”
Section: Related Work (mentioning, confidence: 99%)
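To make the mining step concrete, here is a minimal sketch of frequent-itemset mining over image "transactions" (each transaction being the set of visual words detected in one image). It illustrates the general technique referred to in the excerpt above, not the procedure of any specific cited work; the transactions, the support threshold and the maximum pattern size are hypothetical.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support=0.5, max_size=2):
    """Return itemsets of up to max_size visual words whose support
    (fraction of transactions containing them) reaches min_support."""
    n = len(transactions)
    counts = Counter()
    for transaction in transactions:
        items = sorted(set(transaction))
        for size in range(1, max_size + 1):
            for itemset in combinations(items, size):
                counts[itemset] += 1
    return {itemset: c / n for itemset, c in counts.items() if c / n >= min_support}

# Hypothetical transactions: each image reduced to the set of visual-word ids it contains.
images_as_transactions = [
    {3, 17, 42, 99},
    {3, 17, 58},
    {17, 42, 99},
    {3, 17, 42},
]
for itemset, support in sorted(frequent_itemsets(images_as_transactions).items(),
                               key=lambda kv: (-kv[1], kv[0])):
    print(itemset, f"support={support:.2f}")
```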
“…To apply the FIM technique to image classification, these bags (histograms) need to be converted into sets of items, known as transactions, using the Bag-to-Set (B2S) method [2]. This is done by considering each visual word as an item [13].…”
Section: Introduction (mentioning, confidence: 99%)
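The conversion described in this excerpt can be sketched as follows, assuming the simplest reading of the quote: every visual word with a nonzero count in the bag-of-words histogram becomes an item of the transaction. This does not reproduce the full B2S method of [2]; the vocabulary size and histogram values below are made up for illustration.

```python
import numpy as np

def histogram_to_transaction(bow_histogram):
    """Turn a bag-of-visual-words histogram into a transaction:
    the set of visual-word ids that occur at least once in the image."""
    return {word_id for word_id, count in enumerate(bow_histogram) if count > 0}

# Hypothetical 10-word vocabulary; counts come from quantized local descriptors.
bow = np.array([0, 2, 0, 5, 1, 0, 0, 3, 0, 1])
print(histogram_to_transaction(bow))  # {1, 3, 4, 7, 9}
```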
“…Yuan et al. [36] have proposed another higher-level lexicon, i.e. a visual phrase lexicon, where a visual phrase is a spatially co-occurrent pattern of visual words.…”
Section: Analogy Between Information Retrieval and CBIR (mentioning, confidence: 99%)
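As an illustration of a spatially co-occurrent pattern, the sketch below counts pairs of visual words whose interest points fall within a fixed radius of one another; frequently occurring pairs can then be treated as candidate visual phrases. This is a simplified stand-in for the collocation mining proposed in the paper, not a reimplementation of it; the point coordinates, word assignments and radius are hypothetical.

```python
from collections import Counter

def spatial_cooccurrences(points, radius=30.0):
    """Count unordered pairs of visual words whose interest points lie
    within `radius` pixels of each other; frequent pairs can serve as
    candidate visual phrases."""
    pair_counts = Counter()
    for i in range(len(points)):
        xi, yi, wi = points[i]
        for j in range(i + 1, len(points)):
            xj, yj, wj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                pair_counts[tuple(sorted((wi, wj)))] += 1
    return pair_counts

# Hypothetical interest points: (x, y, visual_word_id)
points = [(10, 12, 5), (18, 20, 7), (200, 210, 5), (205, 215, 7), (400, 40, 9)]
print(spatial_cooccurrences(points, radius=30.0))
# the pair (5, 7) co-occurs in two separate local neighbourhoods
```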
“…The alternative to the costly RANSAC verification is to inject geometric information directly into the retrieval procedure, by either spatially aggregating the local descriptors in a predefined [6] or adaptively selected [13] set of regions, or by capturing word co-occurrences into visual phrases, which correspond to higher-level visual information, either at the level of an entire image [14], or on local neighbourhoods [15,16]. By attaching additional geometric information to the visual words, schemes that deal with similarity transformations (translation, scale) in the image space have been designed [17,18]; addressing more complex transformations (e.g.…”
Section: Introduction (mentioning, confidence: 99%)
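The idea of attaching geometric information to visual words can be illustrated with a simple translation-consistency vote between two images: every word match proposes a translation, translations are quantized into coarse bins, and the most populated bin gives the geometric score. This is only a generic sketch in that spirit, not the specific schemes of [17,18]; the feature tuples, coordinates and bin size are hypothetical.

```python
from collections import Counter, defaultdict

def translation_consistency_score(query_feats, db_feats, bin_size=20.0):
    """query_feats / db_feats: lists of (visual_word_id, x, y).
    Features matched by visual word vote for the translation they imply;
    the score is the size of the most populated translation bin, i.e. the
    largest geometrically consistent group of matches."""
    db_by_word = defaultdict(list)
    for word, x, y in db_feats:
        db_by_word[word].append((x, y))

    votes = Counter()
    for word, qx, qy in query_feats:
        for bx, by in db_by_word.get(word, []):
            # Quantize the implied translation (bx - qx, by - qy) into coarse bins.
            votes[(round((bx - qx) / bin_size), round((by - qy) / bin_size))] += 1
    return max(votes.values()) if votes else 0

# Hypothetical features: three words share a consistent shift of about (100, 50).
query = [(5, 10, 10), (7, 40, 30), (9, 80, 60), (11, 5, 5)]
db = [(5, 110, 60), (7, 140, 80), (9, 180, 110), (11, 300, 300)]
print(translation_consistency_score(query, db))  # 3 consistent matches
```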