We introduce a new descriptor for images which allows the construction of efficient and compact classifiers with good accuracy on object category recognition. The descriptor is the output of a large number of weakly trained object category classifiers on the image. The trained categories are selected from an ontology of visual concepts, but the intention is not to encode an explicit decomposition of the scene. Rather, we accept that existing object category classifiers often encode not the category per se but ancillary image characteristics; and that these ancillary characteristics can combine to represent visual classes unrelated to the constituent categories' semantic meanings. The advantage of this descriptor is that it allows object-category queries to be made against image databases using efficient classifiers (efficient at test time) such as linear support vector machines, and allows these queries to be for novel categories. Even when the representation is reduced to 200 bytes per image, classification accuracy on object category recognition is comparable with the state of the art (36% versus 42%), but at orders of magnitude lower computational cost.
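The idea above can be sketched in a few lines: the descriptor is simply the vector of scores produced by a bank of pre-trained weak classifiers, it can be binarized into a very compact code, and a linear classifier over that vector is all that runs at query time. This is a minimal illustrative sketch; the toy linear "weak classifiers" and all function names are assumptions, not the paper's actual models.

```python
# Sketch of a classeme-style descriptor. The weak classifiers here are
# plain linear scorers over a raw feature vector; in the paper they are
# trained object-category classifiers.

def classeme_descriptor(raw_features, weak_classifiers):
    """One score per weak classifier: this vector IS the image descriptor."""
    return [sum(w * x for w, x in zip(weights, raw_features))
            for weights in weak_classifiers]

def binarize(descriptor):
    """Thresholding the scores gives a compact binary code, which is how
    the representation can shrink to a few hundred bytes per image."""
    return [1 if score > 0 else 0 for score in descriptor]

def linear_query_score(descriptor, svm_weights, bias):
    """At query time only a linear decision function (e.g. a linear SVM)
    is evaluated over the descriptor, so novel-category queries stay cheap."""
    return sum(w * d for w, d in zip(svm_weights, descriptor)) + bias

# Tiny usage example with made-up numbers.
raw = [0.5, -1.0, 2.0]
bank = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
desc = classeme_descriptor(raw, bank)      # [0.5, -1.0, 2.0, 1.5]
code = binarize(desc)                      # [1, 0, 1, 1]
score = linear_query_score(desc, [1, 1, 1, 1], 0.0)
```

Note that the novel-category classifier never sees the raw image features; it is trained and evaluated entirely in the space of weak-classifier outputs.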
We show how high-level scene properties can be inferred from classification of low-level image features, specifically for the indoor-outdoor scene retrieval problem. We systematically studied the features: (1) histograms in the Ohta color space; (2) multiresolution, simultaneous autoregressive model parameters; (3) coefficients of a shift-invariant DCT. We demonstrate that performance is improved by computing features on subblocks, classifying these subblocks, and then combining these results in a way reminiscent of "stacking." State-of-the-art single-feature methods are shown to result in about 75-86% performance, while the new method results in 90.3% correct classification, when evaluated on a diverse database of over 1300 consumer images provided by Kodak.
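The subblock-and-stacking pipeline can be sketched as follows: split the image into a grid of subblocks, classify each subblock independently, and let a second stage combine the subblock labels. In this toy sketch the mean-brightness "classifier" and the majority-vote combiner are placeholders for the paper's learned per-block classifiers and stacking stage; all names and thresholds are assumptions.

```python
# Toy sketch of per-subblock classification followed by a stacking-like
# combination stage, for an indoor/outdoor decision.

def subblocks(image, rows, cols):
    """Split a 2D list of pixel values into rows x cols flat subblocks."""
    h, w = len(image), len(image[0])
    bh, bw = h // rows, w // cols
    blocks = []
    for r in range(rows):
        for c in range(cols):
            blocks.append([image[y][x]
                           for y in range(r * bh, (r + 1) * bh)
                           for x in range(c * bw, (c + 1) * bw)])
    return blocks

def classify_block(block, threshold=128):
    """Placeholder per-block classifier: 1 = outdoor, 0 = indoor.
    A real system would use color/texture features and a trained model."""
    return 1 if sum(block) / len(block) > threshold else 0

def stacked_label(image, rows=2, cols=2):
    """Majority vote over subblock labels stands in for the learned
    second-stage (stacking) classifier."""
    labels = [classify_block(b) for b in subblocks(image, rows, cols)]
    return 1 if sum(labels) * 2 > len(labels) else 0
```

The design point the abstract makes is that local decisions followed by a learned combiner outperform any single global feature; the combiner here is a vote only for brevity.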