2006 IEEE International Conference on Multimedia and Expo
DOI: 10.1109/icme.2006.262955

Using Semantic Features for Scene Classification: How Good Do They Need to Be?

Abstract: Semantic scene classification is a useful, yet challenging problem in image understanding. Most existing systems are based on low-level features, such as color or texture, and succeed to some extent. Intuitively, semantic features, such as sky, water, or foliage, which can be detected automatically, should help close the so-called semantic gap and lead to higher scene classification accuracy. To answer the question of how accurate the detectors themselves need to be, we adopt a generally applicable scene class…

Cited by 12 publications (18 citation statements). References 10 publications.
“…To obtain multiclass classification, we trained a SVM for each class to distinguish it from all others, and classified the image with the class whose SVM gave the maximum output. Further details are given in [3].…”
Section: Discriminative Approach (mentioning, confidence: 99%)
“…The most effective of these models uses pairwise spatial relationships between regions. 3) In Section V, we compare this model with three other generative models: an exact model that models the full joint distribution of the scene type and every semantic region in the image, one that models co-occurrence of these regions while ignoring the actual spatial relations, and one that treats these regions independently. 4) Finally, we compare our model with a discriminative model that uses high-level features and with one that uses low-level features.…”
Mentioning (confidence: 99%)
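As a rough illustration only (the notation is assumed here, not taken from the paper), the simpler generative models mentioned in the excerpt can be read as different treatments of the joint probability of a scene class c and the detected semantic regions r_1, ..., r_N:

```latex
% Exact model: the full joint distribution over the scene class and all regions
P(c, r_1, \dots, r_N)

% Co-occurrence model: which region types occur together given the class,
% with their spatial arrangement ignored
P(c)\, P(\{r_1, \dots, r_N\} \mid c)

% Independence model: each region type conditioned on the class independently
P(c) \prod_{i=1}^{N} P(r_i \mid c)
```

The pairwise-spatial model favored in the excerpt additionally conditions on spatial relations between pairs of regions; in all cases classification would pick the class that maximizes the posterior given the detected regions.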
“…As in [2], we convert the image to LST space and split the image into blocks formed by an NxN grid. We then compute the mean and variance of each block's color band.…”
Section: Raw Feature Extraction (mentioning, confidence: 99%)
“…Spatial color moments are a state-of-the-art feature used to distinguish outdoor scenes [2][10]; we use them as a baseline feature for comparison, even though color is expected to be more salient for outdoor scenes than indoor ones. As in [2], we convert the image to LST space and split the image into blocks formed by an NxN grid.…”
Section: Raw Feature Extraction (mentioning, confidence: 99%)
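A minimal sketch of this spatial color moment extraction, assuming a 7x7 grid and an Ohta-style opponent transform as a stand-in for the LST space of [2] (both the grid size and the exact transform coefficients are assumptions here):

```python
import numpy as np

def rgb_to_lst(img):
    """Opponent-style color transform (Ohta-like); the exact LST
    coefficients used in [2] may differ -- this is an illustrative stand-in."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    L = (r + g + b) / 3.0          # luminance
    S = (r - b) / 2.0              # first chrominance channel
    T = (2.0 * g - r - b) / 4.0    # second chrominance channel
    return np.stack([L, S, T], axis=-1)

def spatial_color_moments(img, n=7):
    """Split the image into an n x n grid and compute the mean and
    variance of each block in each color band -> n*n*3*2 features."""
    lst = rgb_to_lst(img)
    h, w, _ = lst.shape
    feats = []
    for i in range(n):
        for j in range(n):
            block = lst[i * h // n:(i + 1) * h // n,
                        j * w // n:(j + 1) * w // n, :]
            feats.extend(block.mean(axis=(0, 1)))  # per-band means
            feats.extend(block.var(axis=(0, 1)))   # per-band variances
    return np.array(feats)

# Example on a random "image"; a real image would be loaded as an RGB array.
img = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
print(spatial_color_moments(img).shape)  # (294,) = 7 * 7 * 3 bands * 2 moments
```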