This paper collects some ideas aimed at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue and show their utility in two applications of kernel methods: first, to reduce the computational complexity of SV decision functions; second, in combination with the kernel PCA algorithm, to construct a nonlinear statistical denoising technique which is shown to perform well on real-world data.
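As a concrete illustration of the denoising application, the following minimal sketch runs kernel PCA with an approximate preimage step on toy data. It uses scikit-learn, whose inverse map is learned via kernel ridge regression rather than being the paper's own preimage algorithm; the data, kernel, and all parameters are illustrative assumptions.

```python
# Hypothetical sketch: nonlinear denoising with kernel PCA plus an
# approximate preimage, in the spirit of the technique described above.
# Note: scikit-learn learns the inverse map by kernel ridge regression,
# not by the paper's preimage iteration.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Toy data: points on a circle corrupted by Gaussian noise
# (a placeholder for real-world data such as digit images).
angles = rng.uniform(0, 2 * np.pi, size=300)
clean = np.c_[np.cos(angles), np.sin(angles)]
noisy = clean + 0.15 * rng.normal(size=clean.shape)

# Keep only a few leading nonlinear components; reconstructing from them
# projects the noise away, and inverse_transform returns an approximate
# preimage in input space.
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=0.1)
codes = kpca.fit_transform(noisy)
denoised = kpca.inverse_transform(codes)

print("mean distance to the unit circle before:",
      np.abs(np.linalg.norm(noisy, axis=1) - 1).mean())
print("mean distance to the unit circle after: ",
      np.abs(np.linalg.norm(denoised, axis=1) - 1).mean())
```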
The support vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights, and a threshold that minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are computed using error backpropagation. We consider three machines: a classical RBF machine, an SV machine with a Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the United States Postal Service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system. The SV approach is thus not only theoretically well founded but also superior in a practical application.
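A minimal sketch of this comparison is given below, under several stated assumptions: scikit-learn's small digits set stands in for the USPS database, the RBF network's output weights are fit by ridge regression rather than error backpropagation, and all hyperparameters are illustrative guesses rather than the paper's settings.

```python
# Hedged sketch: SV machine with Gaussian kernel vs. a classical RBF
# network whose centers come from k-means, as in the comparison above.
# Ridge regression stands in for backpropagation on the output weights.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # placeholder for the USPS digits
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SV machine with Gaussian (RBF) kernel: centers, weights, and threshold
# all come out of the SV optimization itself.
svm = SVC(kernel="rbf", gamma=1e-3, C=10.0).fit(X_tr, y_tr)
print("SVM accuracy:", svm.score(X_te, y_te))

# Classical RBF network: centers fixed by k-means clustering, followed by
# a linear readout on the Gaussian activations.
centers = KMeans(n_clusters=100, n_init=10,
                 random_state=0).fit(X_tr).cluster_centers_
phi_tr = rbf_kernel(X_tr, centers, gamma=1e-3)
phi_te = rbf_kernel(X_te, centers, gamma=1e-3)
rbf_net = RidgeClassifier(alpha=1.0).fit(phi_tr, y_tr)
print("RBF network accuracy:", rbf_net.score(phi_te, y_te))
```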
Developed only recently, support vector learning machines achieve high generalization ability by minimizing a bound on the expected test error; however, until now there has been no way of adding knowledge about invariances of the classification problem at hand. We present a method of incorporating prior knowledge about transformation invariances by applying transformations to support vectors, the training examples most critical for determining the classification boundary.
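The following hypothetical sketch illustrates the idea on image data: train an SV machine, apply a transformation to the support vectors only (here a one-pixel cyclic shift, a crude stand-in for a true invariance transformation), and retrain on the enlarged set. The dataset and all parameters are assumptions, not the paper's setup.

```python
# Hedged sketch of the "virtual support vector" idea described above.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf", gamma=1e-3, C=10.0).fit(X_tr, y_tr)
sv, sv_y = svm.support_vectors_, y_tr[svm.support_]

def shift(images, dx, dy):
    """Cyclically shift 8x8 digit images by (dx, dy) pixels
    (np.roll wraps around; a crude stand-in for image translation)."""
    imgs = images.reshape(-1, 8, 8)
    return np.roll(np.roll(imgs, dy, axis=1), dx, axis=2).reshape(len(images), -1)

# Virtual examples: the support vectors shifted in four directions,
# labeled like the originals.
virtual = np.vstack([shift(sv, dx, dy)
                     for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]])
virtual_y = np.tile(sv_y, 4)

# Retrain on the training set enlarged by the virtual support vectors.
vsv = SVC(kernel="rbf", gamma=1e-3, C=10.0).fit(
    np.vstack([X_tr, virtual]), np.concatenate([y_tr, virtual_y]))
print("plain SVM:", svm.score(X_te, y_te),
      " virtual-SV SVM:", vsv.score(X_te, y_te))
```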
Abstract. Two view-based object recognition algorithms are compared: (1) a heuristic algorithm based on oriented filters, and (2) a support vector learning machine trained on low-resolution images of the objects. Classification performance is assessed using a large number of images generated by a computer graphics system under precisely controlled conditions. Training and test images show a set of 25 realistic three-dimensional models of chairs from viewing directions spread over the upper half of the viewing sphere. The percentage of correct identification of all 25 objects is measured.

In: Proceedings ICANN'96, International Conference on Artificial Neural Networks. Springer Verlag, Berlin, 1996.

In computer vision, view-based models of object recognition have become more and more influential in recent years. Moreover, psychophysical evidence has been found for a view-based representation of objects in humans (Bülthoff and Edelman, 1992). Unlike viewpoint-invariant representations using structural descriptions (e.g., Marr and Nishihara, 1978), viewpoint-dependent models do not require a three-dimensional representation (Poggio and Edelman, 1990; Lades et al., 1993). The present study compares two recognition algorithms that are explained in the following sections.

Recognition by Oriented Filters. If a three-dimensional object is rotated about a frontoparallel axis, orthographic projections of surface points will move in the image plane in a direction perpendicular to the axis. To a great extent this also applies to perspective projection under realistic viewing conditions. Thus, images of an object can be made insensitive to rotations about a particular frontoparallel axis by lowpass filtering in one direction. In order to compensate for relatively large displacements, the lowpass filter operation extinguishes much of the high spatial frequency structure in one direction. Due to a centering process described below, the lowpass filtering also has to account for displacement components along the axis of rotation. As a consequence, performance cannot be improved significantly by choosing image resolutions higher than 16x16 pixels. In order to retain some of the high spatial frequency information from the initial image, the representation also contains images with an edge detection performed before downsampling. The algorithm uses a set of stored views of each object; they are preprocessed and stored in a low-resolution representation. To classify a test image, it is
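The oriented-filter preprocessing can be sketched as follows, with scipy standing in for the original implementation. Only the 16x16 resolution and the edge-detection-before-downsampling step come from the text above; the filter width, the choice of edge detector, and the input size are illustrative guesses.

```python
# Hedged sketch of the oriented-filter representation described above:
# lowpass filtering along one image direction makes the representation
# insensitive to rotations about the corresponding frontoparallel axis.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def oriented_representation(image, sigma=3.0):
    """Blur along the horizontal axis only, then downsample to 16x16;
    also keep an edge-detected channel computed before downsampling."""
    # Anisotropic Gaussian: sigma per axis (rows, cols) blurs columns only.
    blurred = gaussian_filter(image, sigma=(0.0, sigma))
    step = max(1, image.shape[0] // 16)
    lowpass = blurred[::step, ::step][:16, :16]
    # Retain some high spatial frequency structure: edge detection on the
    # full-resolution image, then the same directional blur and downsampling.
    edges = np.abs(sobel(image, axis=0))
    edges = gaussian_filter(edges, sigma=(0.0, sigma))[::step, ::step][:16, :16]
    return np.stack([lowpass, edges])

view = np.random.default_rng(0).random((64, 64))  # placeholder rendered view
print(oriented_representation(view).shape)  # (2, 16, 16)
```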