2011
DOI: 10.2316/journal.206.2011.2.206-3409
Range Identification for Perspective Vision Systems: A Position-Based Approach

Abstract: In this paper, a new estimator using a single moving calibrated camera is developed to asymptotically recover the range (depth) and the 3D Euclidean position of a static object feature. The position and orientation of the camera are assumed to be measurable, unlike in existing observers where velocity measurements are assumed to be known. To estimate the unknown depth along with the 3D coordinates of a feature, an adaptive least squares estimation strategy is employed based on a novel prediction error formulat…
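The abstract's core idea — recovering a static feature's 3D position from pinhole measurements and measured camera positions, without velocity information — can be illustrated with a minimal sketch. This is not the paper's exact estimator: it assumes identity camera rotation, noise-free measurements, and a standard recursive least squares update on the linear constraints that each projection imposes on the unknown point; all names and numbers below are hypothetical.

```python
import numpy as np

# Hypothetical setup: a static 3D point observed by a moving camera whose
# position is measured. Rotation is fixed to identity for simplicity, so
# only the camera translation varies between views.
X_true = np.array([1.0, 0.5, 4.0])          # unknown 3D feature (world frame)
poses = [np.array([0.3 * k, 0.1 * k, 0.2 * k]) for k in range(10)]

def project(X, t):
    """Pinhole projection of world point X seen from camera position t."""
    p = X - t                                # camera-frame coordinates (R = I)
    return p[:2] / p[2]                      # normalized image coordinates

# Each view gives two equations linear in the unknown X:
#   X_x - u_x * X_z = t_x - u_x * t_z
#   X_y - u_y * X_z = t_y - u_y * t_z
# so a recursive least squares update driven by the prediction error
# b - a @ theta recovers X, including the depth X_z.
theta = np.zeros(3)                          # estimate of X
P = 1e6 * np.eye(3)                          # large initial covariance

for t in poses:
    u = project(X_true, t)                   # measured feature location
    rows = [np.array([1.0, 0.0, -u[0]]),
            np.array([0.0, 1.0, -u[1]])]
    rhs = [t[0] - u[0] * t[2],
           t[1] - u[1] * t[2]]
    for a, b in zip(rows, rhs):
        K = P @ a / (1.0 + a @ P @ a)        # gain
        theta = theta + K * (b - a @ theta)  # prediction-error update
        P = P - np.outer(K, a @ P)           # covariance update

print(theta)   # converges to X_true, so the depth estimate theta[2] approaches 4.0
```

With noise-free measurements and sufficient translation between views, the estimate converges to the true point; translation along the optical axis is what makes the depth observable.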

Cited by 5 publications (5 citation statements)
References 37 publications
“…To proceed with the error development, the term is multiplied on both sides of (8), which results in the following expression: (14) The estimate of (14) can be written as follows: (15) where and are the estimates of and , respectively, and denote and , respectively, where are the estimates of and , respectively. To facilitate the development, the terms and are added and subtracted from the right-hand side and the left-hand side of (14), respectively, which yields the following expression: (16) After subtracting (15) from (16), the following expression is obtained: (17) where is the prediction error for the th feature point , is the structure estimation error, and denotes , where . After adding and subtracting the term on the left-hand side of (17) and simplifying, the following expression can be obtained: (18) where denotes and .…”
Section: A Prediction Error Formulation
confidence: 99%
“…To facilitate the development, the terms and are added and subtracted from the right-hand side and the left-hand side of (14), respectively, which yields the following expression: (16) After subtracting (15) from (16), the following expression is obtained: (17) where is the prediction error for the th feature point , is the structure estimation error, and denotes , where . After adding and subtracting the term on the left-hand side of (17) and simplifying, the following expression can be obtained: (18) where denotes and . To ease the subsequent analysis, we combine these individual vectors to obtain their respective compact forms.…”
Section: A Prediction Error Formulation
confidence: 99%
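The quoted passage ends by combining per-feature vectors into "compact forms" to ease the analysis. The inline symbols were lost in extraction, so the following is only a generic sketch of that stacking step: each of n features contributes its own regressor and error vector, and vertically stacking them lets one batch solve handle all features at once. All shapes and names are hypothetical.

```python
import numpy as np

# Hypothetical shapes: each of n tracked features contributes a 2-vector
# error e_i and a 2x3 regressor W_i relating it to shared unknown
# parameters theta; the compact forms stack them into one system.
rng = np.random.default_rng(0)
n = 4
W_list = [rng.standard_normal((2, 3)) for _ in range(n)]
theta_true = np.array([0.2, -1.0, 3.0])     # shared unknown parameters
e_list = [W @ theta_true for W in W_list]   # noise-free per-feature errors

W = np.vstack(W_list)                       # compact regressor, shape (2n, 3)
e = np.concatenate(e_list)                  # compact error vector, shape (2n,)

# One batch least-squares solve over the stacked system recovers theta.
theta_hat, *_ = np.linalg.lstsq(W, e, rcond=None)
print(theta_hat)
```

The stacked form is equivalent to solving each feature's equations jointly, which is what makes a single adaptive update law cover every tracked point.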
doi:10.1016/j.automatica.2010.03.017
“…identify the range when the motion parameters were known (Chen & Kano, 2002; Dixon, Fang, Dawson, & Flynn, 2003; Jankovic & Ghosh, 1995; Karagiannis & Astolfi, 2005; Ma, Chen, & Moore, 2004). More recently, in Nath, Braganza, and Dawson (2008a), the measurement of camera position was utilized to develop an adaptive estimator to recover the structure; this was extended in Nath, Braganza, Dawson, and Burg (2008b) to recover the range.…”
Section: Introduction
confidence: 99%