Abstract: This paper presents two new, efficient solutions to the two-view relative pose problem from three image point correspondences and one common reference direction. This three-plus-one problem can be used either as a substitute for the classic five-point algorithm, using a vanishing point for the reference direction, or to exploit an inertial measurement unit, commonly available on robots and mobile devices, where the gravity vector becomes the reference direction. We provide a simple closed-form solution and a solution based on algebraic geometry that offers numerical advantages. In addition, we introduce a new method for computing visual odometry with RANSAC using four point correspondences per hypothesis. In a set of real experiments, we demonstrate the power of our approach by comparing it to the five-point method in a hypothesize-and-test visual odometry setting.
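Although the abstract does not spell out the construction, the geometric fact behind any three-plus-one solver is that a direction known in both views fixes two of the three rotational degrees of freedom, leaving a single unknown angle about that direction. The NumPy sketch below illustrates this reduction only; it is not the paper's solver, and the function names and the choice of the canonical y-axis are assumptions made for the example.

```python
# Minimal sketch: a shared reference direction reduces the relative rotation
# to one unknown angle. Not the paper's algorithm; illustrative only.
import numpy as np

def rotation_aligning(d, e=np.array([0.0, 1.0, 0.0])):
    """Return a rotation matrix R with R @ d = e (Rodrigues' formula)."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    v = np.cross(d, e)
    c = float(np.dot(d, e))
    if np.isclose(c, 1.0):                    # d already points along e
        return np.eye(3)
    if np.isclose(c, -1.0):                   # opposite direction: 180-degree turn
        axis = np.cross(d, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(d, np.array([0.0, 0.0, 1.0]))
        axis = axis / np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)

def relative_rotation(d1, d2, theta):
    """Relative rotation consistent with a direction observed as d1 in
    camera 1 and d2 in camera 2, parameterized by a single angle theta
    about that direction (R @ d1 = d2 for every theta)."""
    R1 = rotation_aligning(d1)                # camera 1 -> canonical frame
    R2 = rotation_aligning(d2)                # camera 2 -> canonical frame
    c, s = np.cos(theta), np.sin(theta)
    Ry = np.array([[c, 0.0, s],
                   [0.0, 1.0, 0.0],
                   [-s, 0.0, c]])             # rotation about the canonical y-axis
    return R2.T @ Ry @ R1

# Sanity check: the shared direction is preserved for any theta.
d1 = np.array([0.1, 0.9, 0.2]); d1 /= np.linalg.norm(d1)
d2 = np.array([-0.3, 0.8, 0.1]); d2 /= np.linalg.norm(d2)
R = relative_rotation(d1, d2, 0.7)
assert np.allclose(R @ d1, d2)
```

With the rotation reduced to one parameter, three point correspondences plus the reference direction constrain the remaining angle and the translation up to scale, which is what makes the four-correspondence RANSAC hypotheses described in the abstract possible.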
In this paper, we reexamine the problem of general higher-order unification and develop an approach based on the method of transformations on systems of terms, which has its roots in Herbrand's thesis and which was developed by Martelli and Montanari in the context of first-order unification. This method provides an abstract and mathematically elegant means of analyzing the invariant properties of unification in various settings by providing a clean separation of the logical issues from the specification of procedural information. Our major contribution is three-fold. First, we have extended the Herbrand-Martelli-Montanari method of transformations on systems to higher-order unification and pre-unification; second, we have used this formalism to provide a more direct proof of the completeness of a method for higher-order unification than has previously been available; and, finally, we have shown the completeness of the strategy of eager variable elimination. In addition, this analysis provides another justification of the design of Huet's procedure, and shows how its basic principles work in a more general setting. Finally, it is hoped that this presentation might form a good introduction to higher-order unification for those readers unfamiliar with the field.

Thus the interesting issue is in finding natural sets of transformations which present in an abstract form the fundamental operations of unification, but which are complete in this sense. In order to introduce the notion of higher-order unification, we shall first demonstrate the full method in the first-order case, and then sketch what changes need to be made to deal with higher-order terms. This will hopefully provide the necessary intuition for the more detailed treatment in the remainder...
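Since the text promises to demonstrate the method in the first-order case first, here is a minimal, hedged sketch of first-order unification as repeated transformations on a system of equations (delete, decompose, orient, and variable elimination with an occurs check), in the Martelli-Montanari spirit. The term representation and function names are choices made for this example, not notation from the paper.

```python
# First-order unification by transformations on a system of equations.
# Terms: a variable is a string starting with an uppercase letter;
# a compound term is a tuple (functor, arg1, ..., argn); constants are
# lowercase strings. Illustrative sketch only.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def occurs(x, t):
    if t == x:
        return True
    return isinstance(t, tuple) and any(occurs(x, a) for a in t[1:])

def substitute(t, x, s):
    if t == x:
        return s
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(a, x, s) for a in t[1:])
    return t

def unify(equations):
    """Return a most general unifier as a dict, or None on failure."""
    eqs = list(equations)
    subst = {}
    while eqs:
        s, t = eqs.pop()
        if s == t:                                   # delete (trivial equation)
            continue
        if is_var(t) and not is_var(s):              # orient: put the variable first
            s, t = t, s
        if is_var(s):                                # eliminate (variable elimination)
            if occurs(s, t):                         # occurs check: no finite unifier
                return None
            eqs = [(substitute(a, s, t), substitute(b, s, t)) for a, b in eqs]
            subst = {x: substitute(u, s, t) for x, u in subst.items()}
            subst[s] = t
        elif (isinstance(s, tuple) and isinstance(t, tuple)
              and s[0] == t[0] and len(s) == len(t)):
            eqs.extend(zip(s[1:], t[1:]))            # decompose f(s1..sn) = f(t1..tn)
        else:
            return None                              # clash: distinct functors/constants

# Example: unify f(X, g(a)) with f(g(Y), g(Y))  =>  {'Y': 'a', 'X': ('g', 'a')}
print(unify([(("f", "X", ("g", "a")), ("f", ("g", "Y"), ("g", "Y")))]))
```

In the higher-order case these rules must be generalized to handle λ-terms and flexible heads; Huet's pre-unification, mentioned above, defers flexible-flexible pairs rather than solving them outright.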
Preface

This book is an introduction to fundamental geometric concepts and tools needed for solving problems of a geometric nature with a computer. Our main goal is to present a collection of tools that can be used to solve problems in computer vision, robotics, machine learning, computer graphics, and geometric modeling.

During the ten years following the publication of the first edition of this book, optimization techniques have made a huge comeback, especially in the fields of computer vision and machine learning. In particular, convex optimization and its special incarnation, semidefinite programming (SDP), are now widely used techniques in computer vision and machine learning, as one may verify by looking at the proceedings of any conference in these fields. Therefore, we felt that it would be useful to include some material (especially on convex geometry) to prepare the reader for more comprehensive expositions of convex optimization, such as Boyd and Vandenberghe [2], a masterly and encyclopedic account of the subject. In particular, we added Chapter 7, which covers separating and supporting hyperplanes.

We also realized that the importance of the SVD (singular value decomposition) and of the pseudo-inverse had not been sufficiently stressed in the first edition of this book, and we rectified this situation in the second edition. In particular, we added sections on PCA (principal component analysis) and on best affine approximations and showed how they are efficiently computed using the SVD. We also added a section on quadratic optimization and a section on the Schur complement, showing the usefulness of the pseudo-inverse.

In this second edition, many typos and small mistakes have been corrected, some proofs have been shortened, some problems have been added, and some references have been added. Here is a list containing brief descriptions of the chapters that have been modified or added.

• Chapter 3, on the basic properties of convex sets, has been expanded. In particular, we state a version of Carathéodory's theorem for convex cones (Theorem 3.2), a version of Radon's theorem for pointed cones (Theorem 3.6), and Tverberg's theorem (Theorem 3.7), and we define centerpoints and prove their existence (Theorem 3.9).

• Chapter 14 ... and we prove rigorously how SVD yields PCA (Theorem 14.3), using the Rayleigh-Ritz ratio (Lemma 14.2). In Section 14.4, it is shown how to best approximate a set of data with an affine subspace in the least squares sense. Again, the SVD can be used to find the solutions.

• Chapter 15 is new, except for Section 15.1, which reproduces Section 13.2 from the first edition of this book. We added the definition of the positive semidefinite cone ordering ⪰ on symmetric matrices, since it is extensively used in convex optimization. In Section 15.2, we find a necessary and sufficient condition (Proposition 15.2) for the quadratic function f(x) = (1/2)xᵀAx + xᵀb to have a minimum in terms of the pseudo-inverse of A (where A is a symmetric matrix). We also show how to accommodate linear constraints of the form Cᵀx = 0 or...
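The two additions the preface emphasizes, PCA obtained from the SVD and quadratic minimization via the pseudo-inverse, are easy to see in code. The NumPy sketch below is an illustration under the usual statements of these results (a minimum of f(x) = (1/2)xᵀAx + xᵀb with A symmetric positive semidefinite exists iff b lies in the range of A, in which case x = -A⁺b is a minimizer); the function names are invented for the example and are not from the book.

```python
# Hedged sketch of two computations highlighted in the preface:
# (1) PCA from the SVD of the centered data matrix, and
# (2) minimizing f(x) = (1/2) x^T A x + x^T b via the pseudo-inverse.
import numpy as np

def pca_via_svd(X, k):
    """Rows of X are data points; return the top-k principal directions
    and the centered data projected onto them."""
    mu = X.mean(axis=0)
    Xc = X - mu                               # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    directions = Vt[:k]                       # right singular vectors = principal directions
    scores = Xc @ directions.T                # coordinates along those directions
    return directions, scores

def minimize_quadratic(A, b):
    """Minimize f(x) = 1/2 x^T A x + x^T b for symmetric PSD A.
    A minimum exists iff b lies in the range of A; then x = -A^+ b is optimal."""
    A_pinv = np.linalg.pinv(A)
    if not np.allclose(A @ A_pinv @ b, b):    # b not in range(A): f is unbounded below
        return None
    return -A_pinv @ b

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.diag([3.0, 1.0, 0.1])
dirs, _ = pca_via_svd(X, 2)
A = np.array([[2.0, 0.0], [0.0, 0.0]])        # PSD but singular
b = np.array([4.0, 0.0])                      # lies in range(A)
print(dirs.shape, minimize_quadratic(A, b))   # prints (2, 3) and [-2.  0.]
```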