2018
DOI: 10.1109/tip.2018.2830127
The Visual Word Booster: A Spatial Layout of Words Descriptor Exploiting Contour Cues

Abstract: Although researchers have made efforts to use the spatial information of visual words to obtain better image representations, none of the studies take contour cues into account. Meanwhile, it has been shown that contour cues are important to the perception of imagery in the literature. Inspired by these studies, we propose to use the Spatial Layout of Words (SLoW) to boost visual word based image descriptors by exploiting contour cues. Essentially, the SLoW descriptor utilises contours and incorporates differe…
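As background to the abstract, the sketch below shows a conventional bag-of-visual-words encoding, the kind of baseline representation a spatial-layout booster such as SLoW builds on. This is not the paper's method; the local-feature input, vocabulary size, and all function names are illustrative assumptions.

```python
# Minimal bag-of-visual-words baseline (illustrative only, not the SLoW descriptor).
# Assumes dense local descriptors have already been extracted for each image.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, num_words=256, seed=0):
    """Cluster pooled local descriptors into a visual-word vocabulary."""
    stacked = np.vstack(descriptor_sets)              # (total_descriptors, dim)
    return KMeans(n_clusters=num_words, random_state=seed, n_init=10).fit(stacked)

def encode_image(descriptors, vocabulary):
    """Histogram of visual-word assignments; note it discards spatial layout."""
    words = vocabulary.predict(descriptors)           # nearest word per descriptor
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)                # L1-normalised histogram
```

The point of SLoW, as the abstract describes, is to re-introduce spatial and in particular contour-based structure that such a plain histogram discards.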


Cited by 9 publications (6 citation statements)
References 69 publications
“…Extracting contour maps Previous study on visual perception [23], [24], [30] has shown that edge information plays an important role in the estimation of texture similarity. In order to learn the perceptual similarity between the two texture images, we therefore propose to use the contour maps of the texture images as auxiliary inputs for prediction.…”
Section: Methods
Mentioning, confidence: 99%
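A minimal sketch of the kind of contour-map extraction the quoted passage refers to, assuming OpenCV's Canny detector as a stand-in for the citing paper's contour detector; the thresholds and function name are assumptions.

```python
# Illustrative contour-map extraction; the citing paper's actual contour
# detector may differ, Canny is used here only as a stand-in.
import cv2
import numpy as np

def contour_map(image_path, low=50, high=150):
    """Return an edge/contour map in [0, 1] usable as an auxiliary network input."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, low, high)    # binary edge map (values 0 or 255)
    return edges.astype(np.float32) / 255.0
```

The resulting map can then be stacked with the texture image itself (for example as an extra channel) so that the similarity model receives both appearance and contour cues.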
“…In [24], Dong and Chantler observed that contour maps provide better texture representation than other types of local texture characteristics. They attributed this success to the long-range interactions between local image characteristics encoded by contours [23], [30].…”
Section: Introduction
Mentioning, confidence: 99%
“…Pre-trained CNNs can also be fine-tuned using a specific small image dataset [15]. On the other hand, Cimpoi et al [15] and Dong and Dong [34] used the features extracted at the convolutional layer of a pretrained classification CNN to learn visual words for image representation. In particular, it has been shown that the features extracted at the convolutional layer are less dependent of the dataset than those extracted at the fully-connected layer.…”
Section: Image Classification Using Pre-trained CNNs
Mentioning, confidence: 99%
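The quoted approach of learning visual words from convolutional-layer features can be sketched roughly as below, assuming a torchvision VGG-16 backbone; the layer choice and helper name are assumptions, not the exact settings of the cited papers.

```python
# Rough sketch: local descriptors taken from the convolutional part of a
# pretrained CNN (backbone and layer choice are assumptions).
import torch
from torchvision import models

backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def conv_descriptors(image_tensor):
    """Treat each spatial position of the final conv feature map as a local descriptor."""
    with torch.no_grad():
        fmap = backbone(image_tensor.unsqueeze(0))            # (1, C, H, W)
    channels = fmap.shape[1]
    return fmap.squeeze(0).reshape(channels, -1).T.numpy()    # (H*W, C)
```

These descriptors can then be clustered into a vocabulary (for example with k-means, as in the bag-of-words sketch above), rather than using fully-connected-layer features, which the quote notes are more dataset-dependent.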
“…We did not tune the parameters of the linear SVM. The C value was set to 10 as performed in earlier studies [15], [34].…”
Section: A. Implementation Details
Mentioning, confidence: 99%
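The fixed-C linear SVM mentioned in the quote corresponds, roughly, to the scikit-learn call sketched below; the feature and label variables are placeholders, not the cited papers' code.

```python
# Linear SVM with C fixed at 10, matching the quoted experimental setting
# (feature and label arrays are placeholders).
from sklearn.svm import LinearSVC

def train_classifier(train_features, train_labels):
    clf = LinearSVC(C=10.0)    # C is not tuned further
    return clf.fit(train_features, train_labels)

# Example: predictions = train_classifier(X_train, y_train).predict(X_test)
```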