The Problem

Given six or more pairs of corresponding points in two calibrated images, the accurate estimation of the essential matrix (EsM), a $3 \times 3$ matrix capturing the relative translation $\mathbf{t}$ and rotation $R$ separating the two pinhole cameras, requires solving a nonlinear optimization problem subject to a set of constraints that guarantee that the resulting $3 \times 3$ matrix has the structure of a valid EsM (i.e. $E = [\mathbf{t}]_\times R$, or equivalently $\operatorname{svd}(E) = U \operatorname{diag}(1,1,0)\, V^\top$, or equivalently $E E^\top E = \tfrac{1}{2} \operatorname{tr}(E E^\top)\, E$). To the best of our knowledge, all existing schemes enforce the EsM constraints by performing the optimization on the manifold $\mathcal{E}$ of EsMs, using either global [2] or local [3] parametrizations. No attempts have been made to use the more straightforward approach of integrating the EsM constraint $E E^\top E = \tfrac{1}{2} \operatorname{tr}(E E^\top)\, E$ directly into the optimization, possibly because this $3 \times 3$ matrix equation, together with the homogeneity property of the EsM (i.e. $E$ and $cE$ represent the same EsM for all $c \neq 0$), gives a total of ten (nonlinearly dependent) constraints, while the number of variables in a $3 \times 3$ matrix is only nine.

Idea

To avoid this problem, we propose to use adaptive penalty methods [1] to incorporate the matrix constraint into the optimization. Penalty methods relax the constraints (and so do not suffer from the too-many-constraints problem) while making their violation expensive. Assuming that $f(\mathbf{e})$ is the cost function measuring the (robust) algebraic or geometric fitting error of the 9-vector $\mathbf{e}$ corresponding to $E$, and that $\mathbf{h}_2(\mathbf{e}) = \operatorname{vec}\{E E^\top E - \tfrac{1}{2} \operatorname{tr}(E E^\top)\, E\}$ is the EsM constraint function, we define the penalty-augmented cost function

\[
f_c(\mathbf{e}) = f(\mathbf{e}) + \tfrac{c}{2}\, \|\mathbf{h}_2(\mathbf{e})\|^2, \tag{1}
\]

where $c > 0$ is called the penalty parameter. The two functions $f(\mathbf{e})$ and $f_c(\mathbf{e})$ are equal iff $\mathbf{e} \in \mathcal{E}$; otherwise $f_c(\mathbf{e}) > f(\mathbf{e})$. Ideally, one would set $c$ to a very large number or $\infty$ so that the minimizers of the original and penalty-augmented problems coincide, but such a strategy would fail to locate the (local) minimum precisely due to finite machine precision. Instead, we repeatedly compute the minimum of $f_c$ for a gradually increasing sequence $\{c_k\}$, using the minimizer of $f_{c_k}$ as the initial guess for the minimizer of $f_{c_{k+1}}$. If at iteration $k$ the current estimate of the EsM is $\mathbf{e}_k$, we compute the update $\boldsymbol{\delta}_k \in \mathbb{R}^9$ on $\mathbf{e}_k$ by solving the following optimization problem:

\[
\min_{\boldsymbol{\delta}_k} f_{c_k}(\mathbf{e}_k + \boldsymbol{\delta}_k) \quad \text{subject to} \quad \mathbf{e}_k^\top \boldsymbol{\delta}_k = 0 \ \ \text{(to ensure that $\mathbf{e}_{k+1}$ stays away from zero)}. \tag{2}
\]

Solution Procedure

Here we use the popular Gauss-Newton iteration to solve the above problem. In particular, we build a convex quadratic program (QP) approximation to it by (a) replacing $f$ with the convex second-order Taylor approximation $\tfrac{1}{2}\,\boldsymbol{\delta}_k^\top H_f(\mathbf{e}_k)\, \boldsymbol{\delta}_k + \nabla f(\mathbf{e}_k)^\top \boldsymbol{\delta}_k + f(\mathbf{e}_k)$ and (b) replacing $\mathbf{h}_2(\mathbf{e}_k + \boldsymbol{\delta}_k)$ with the linear Taylor approximation $\mathbf{h}_2(\mathbf{e}_k) + J_{\mathbf{h}_2}(\mathbf{e}_k)\, \boldsymbol{\delta}_k$. The resulting QP is given by

\[
\min_{\boldsymbol{\delta}_k} \tfrac{1}{2}\, \boldsymbol{\delta}_k^\top H_f^k\, \boldsymbol{\delta}_k + (\nabla f^k)^\top \boldsymbol{\delta}_k + \tfrac{c_k}{2}\, \big\| \mathbf{h}_2(\mathbf{e}_k) + J_{\mathbf{h}_2}(\mathbf{e}_k)\, \boldsymbol{\delta}_k \big\|^2 \quad \text{subject to} \quad \mathbf{e}_k^\top \boldsymbol{\delta}_k = 0,
\]

where $H_f^k = H_f(\mathbf{e}_k)$ and $\nabla f^k = \nabla f(\mathbf{e}_k)$. Introducing a scalar Lagrange multiplier ...
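To make the constraint function $\mathbf{h}_2$ and the penalty-augmented cost concrete, here is a minimal NumPy sketch (not from the paper; the function names are hypothetical). As a stand-in for the paper's (robust) algebraic or geometric cost, it uses the plain algebraic cost $f(\mathbf{e}) = \|A\mathbf{e}\|^2$, where each row of $A$ encodes one epipolar constraint $\mathbf{x}_2^\top E\, \mathbf{x}_1 = 0$.

```python
import numpy as np

def esm_constraint(e):
    # h_2(e) = vec(E E^T E - 0.5 tr(E E^T) E); the zero vector iff E is a valid EsM.
    E = e.reshape(3, 3)
    return (E @ E.T @ E - 0.5 * np.trace(E @ E.T) * E).ravel()

def algebraic_cost_matrix(x1, x2):
    # x1, x2: (N, 3) arrays of homogeneous, calibrated image points.
    # Row i is kron(x2_i, x1_i), so that x2_i^T E x1_i = A[i] @ e
    # for the row-major vectorization e = E.ravel().
    return np.stack([np.kron(b, a) for a, b in zip(x1, x2)])

def penalty_cost(e, A, c):
    # f_c(e) = f(e) + 0.5 c ||h_2(e)||^2 with the algebraic cost f(e) = ||A e||^2.
    r = A @ e
    h = esm_constraint(e)
    return r @ r + 0.5 * c * (h @ h)
```

As a sanity check, `esm_constraint` returns (numerically) zero on $\operatorname{vec}([\mathbf{t}]_\times R)$ for any rotation $R$ and translation $\mathbf{t}$, and is nonzero for a generic $3 \times 3$ matrix.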
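Continuing the sketch, one possible rendering of the penalty Gauss-Newton loop under the same assumptions: the Jacobian $J_{\mathbf{h}_2}$ is approximated by central finite differences for brevity (an analytic expression exists), the equality-constrained QP is solved via its KKT system, in which the scalar Lagrange multiplier of $\mathbf{e}_k^\top \boldsymbol{\delta}_k = 0$ appears as the bordering variable, and the fixed schedule $c_{k+1} = 10\, c_k$ replaces the paper's adaptive penalty rule [1].

```python
def numerical_jacobian(fun, x, eps=1e-7):
    # Central-difference Jacobian; a stand-in for the analytic J_{h_2}.
    fx = fun(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        J[:, i] = (fun(xp) - fun(xm)) / (2.0 * eps)
    return J

def gauss_newton_step(e, A, c):
    # Solve the QP  min 0.5 d^T H d + g^T d  s.t.  e^T d = 0, with
    # H = H_f + c J^T J (Gauss-Newton Hessian of f_c), g = grad f + c J^T h_2.
    h = esm_constraint(e)
    J = numerical_jacobian(esm_constraint, e)
    H = 2.0 * A.T @ A + c * J.T @ J + 1e-9 * np.eye(9)  # tiny damping, for safety only
    g = 2.0 * A.T @ (A @ e) + c * J.T @ h
    # Bordered KKT system: the last unknown is the scalar Lagrange multiplier.
    K = np.block([[H, e[:, None]], [e[None, :], np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([-g, [0.0]]))
    return sol[:9]

def estimate_esm(e0, A, c0=1.0, growth=10.0, n_outer=6, n_inner=10):
    # Minimize f_{c_k} for an increasing sequence {c_k}, warm-starting each
    # round at the previous minimizer.
    e, c = e0 / np.linalg.norm(e0), c0
    for _ in range(n_outer):
        for _ in range(n_inner):
            e = e + gauss_newton_step(e, A, c)
        c *= growth
    return e
```

Note that with $\mathbf{e}_k^\top \boldsymbol{\delta}_k = 0$ we get $\|\mathbf{e}_k + \boldsymbol{\delta}_k\|^2 = \|\mathbf{e}_k\|^2 + \|\boldsymbol{\delta}_k\|^2 \geq \|\mathbf{e}_k\|^2$, so the iterates cannot drift toward the zero matrix despite the homogeneity of $E$.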