Localizing a set of features with known ground coordinates and finding their matches in an image taken by the imaging sensor on an aerial vehicle is the basic concept behind Vision Based Navigation (VBN). The number of matching points necessary for solving the collinearity equations is a critical factor to be investigated when using the VBN approach for navigation. Although a robust, scale- and rotation-invariant image matching algorithm is important for VBN of aerial vehicles, proper estimation of the object space transformation parameters of the collinearity equations improves the efficiency of the navigation process through real-time estimation of these parameters, which can then be used to aid the inertial measurement data in the navigation estimation filter. The main objective of this paper is to investigate the estimation of the object space transformation parameters necessary for VBN of aerial vehicles under the assumption that the aerial vehicle experiences large rotational angles, which leads to a non-linear estimation model. In this case, traditional least squares approaches will fail, or will take longer to converge, because of the non-linearity of the mathematical model. Five nonlinear optimization methods are presented for estimating the six transformation parameters: four gradient-based methods (trust region, trust region dogleg, Levenberg-Marquardt, and quasi-Newton line search) and one gradient-free method (Nelder-Mead simplex direct search). The effect of the proposed nonlinear optimization approaches on the image matching algorithm required for the VBN approach is also assessed.
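As an illustrative sketch only (not the paper's implementation), the six-parameter estimation problem can be posed as a nonlinear least-squares fit of the standard photogrammetric collinearity equations, with the three camera-position coordinates and three rotation angles as unknowns. The snippet below solves a synthetic instance with SciPy's Levenberg-Marquardt solver; all point coordinates, the focal length, and the omega-phi-kappa angle convention are assumptions chosen for the example:

```python
import numpy as np
from scipy.optimize import least_squares

def rotation(omega, phi, kappa):
    """Rotation matrix from omega-phi-kappa angles (one common photogrammetric convention)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(params, ground, f):
    """Collinearity equations: map ground points (n x 3) to image coordinates (n x 2)."""
    Xs, Ys, Zs, omega, phi, kappa = params
    R = rotation(omega, phi, kappa)
    d = ground - np.array([Xs, Ys, Zs])  # vectors from camera to ground points
    u = d @ R.T                          # rotate into the camera frame
    x = -f * u[:, 0] / u[:, 2]
    y = -f * u[:, 1] / u[:, 2]
    return np.column_stack([x, y])

def residuals(params, ground, image, f):
    """Stacked image-coordinate residuals for the least-squares solver."""
    return (project(params, ground, f) - image).ravel()

# Synthetic example: true pose with deliberately large rotation angles (radians).
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(-200, 200, 8),
                          rng.uniform(-200, 200, 8),
                          rng.uniform(0, 30, 8)])
true_params = np.array([10.0, -20.0, 1000.0, 0.4, 0.3, 1.2])
f = 100.0
image = project(true_params, ground, f)

# Perturbed initial guess; Levenberg-Marquardt refines it to the true pose.
x0 = true_params + np.array([30.0, -30.0, 50.0, 0.05, -0.05, 0.05])
sol = least_squares(residuals, x0, args=(ground, image, f), method="lm")
# sol.x should recover true_params closely on this noise-free synthetic data
```

The other methods named in the abstract can be swapped in with minimal changes: `method="trf"` and `method="dogbox"` in `least_squares` give trust-region variants, while `scipy.optimize.minimize` on the sum of squared residuals supports `method="BFGS"` (quasi-Newton line search) and `method="Nelder-Mead"`.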