Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)
DOI: 10.1109/3dim.2005.49

Further Improving Geometric Fitting

Abstract: We give a formal definition of geometric fitting in a way that suits computer vision applications. We point out that the performance of geometric fitting should be evaluated in the limit of small noise rather…

Cited by 14 publications (20 citation statements) · References 21 publications

“…12), we must use an iterative method. Fortunately, a number of iterative AML methods have been developed [15,16,17], all of which have been shown to converge very quickly in theory for data with small noise [15]. All of these methods are based on solving a similar eigenvalue problem to the direct methods (Sec.…”
Section: Iterative AML Methods: HEIV and Reduced
confidence: 99%
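The iterative AML methods this excerpt refers to (FNS, HEIV, and related schemes) share one structure: re-solve an eigenvalue problem whose matrix is rebuilt from the current estimate, until the estimate stops changing. Below is a minimal FNS-style sketch of that loop in Python; the function name, data layout, initialization, and stopping rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fns_fit(xi, V, tol=1e-10, max_iter=100):
    """FNS-style iteration for the AML cost
    J(theta) = sum_a (theta . xi_a)^2 / (theta . V_a theta),  ||theta|| = 1.
    xi: (N, d) data vectors; V: (N, d, d) their covariance matrices.
    Sketch only; names and stopping rule are assumptions."""
    # Start from the total-least-squares solution (smallest right singular vector).
    theta = np.linalg.svd(xi)[2][-1]
    for _ in range(max_iter):
        num = xi @ theta                                # theta . xi_a
        den = np.einsum('i,nij,j->n', theta, V, theta)  # theta . V_a theta
        M = np.einsum('n,ni,nj->ij', 1.0 / den, xi, xi)
        L = np.einsum('n,nij->ij', (num / den) ** 2, V)
        # Re-solve the eigenvalue problem: eigenvector of X = M - L
        # for the eigenvalue closest to zero.
        w, U = np.linalg.eigh(M - L)
        theta_new = U[:, np.argmin(np.abs(w))]
        if theta_new @ theta < 0:                       # resolve the sign ambiguity
            theta_new = -theta_new
        if np.linalg.norm(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta
```

At a fixed point, θ is an eigenvector of X(θ) = M(θ) − L(θ) with eigenvalue near zero, which is the stationarity condition of the AML cost; this is the "similar eigenvalue problem" the excerpt mentions.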
“…It can be shown that the covariance matrix of the resulting solution û coincides with the KCR lower bound (the right-hand side of eq. (8)) except for O(σ⁴) [1,8,9]. The fundamental matrix F should also satisfy the constraint det F = 0 [5].…”
Section: Maximum Likelihood Estimation
confidence: 99%
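A minimal sketch of how the det F = 0 constraint is commonly imposed after an unconstrained fit: zero the smallest singular value of the estimated fundamental matrix. This is the standard SVD truncation, assumed here for illustration; it is not necessarily the statistically optimal correction the cited papers use.

```python
import numpy as np

def enforce_rank2(F):
    """Project an estimated 3x3 fundamental matrix onto det F = 0
    by zeroing its smallest singular value (standard SVD truncation)."""
    U, s, Vt = np.linalg.svd(F)
    s[-1] = 0.0
    return U @ np.diag(s) @ Vt
```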
“…The solution is optimal in the sense that its covariance matrix agrees with the theoretical accuracy bound (KCR lower bound) except for higher-order terms in noise [1,8]. Kanatani's renormalization [8] is also known to be nearly equivalent to FNS and HEIV [9]. In this paper, we add a fourth method: directly computing ML by Gauss-Newton iterations.…”
Section: Introduction
confidence: 99%
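One way to read "directly computing ML by Gauss-Newton iterations" is to write the cost as a sum of squared residuals rα = (θ, ξα)/√(θ, V[ξα]θ) and apply a generic Gauss-Newton loop. The sketch below does exactly that under this assumption; it is not the authors' actual update rule.

```python
import numpy as np

def gauss_newton_fit(xi, V, theta0, tol=1e-10, max_iter=100):
    """Generic Gauss-Newton loop for J(theta) = sum_a r_a^2 with
    r_a = (theta . xi_a) / sqrt(theta . V_a theta).  Sketch only."""
    theta = theta0 / np.linalg.norm(theta0)
    for _ in range(max_iter):
        num = xi @ theta
        den = np.einsum('i,nij,j->n', theta, V, theta)
        r = num / np.sqrt(den)
        # Jacobian of the residuals: d r_a / d theta.
        Vtheta = np.einsum('nij,j->ni', V, theta)
        J = xi / np.sqrt(den)[:, None] - (num / den**1.5)[:, None] * Vtheta
        # r is scale-invariant in theta, so J @ theta = 0 and J'J is singular;
        # lstsq returns the minimum-norm step, orthogonal to that null direction.
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]
        theta_new = theta + delta
        theta_new /= np.linalg.norm(theta_new)
        if theta_new @ theta < 0:
            theta_new = -theta_new
        if np.linalg.norm(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta
```

Renormalizing θ after each step keeps the iteration on the unit sphere, matching the scale ambiguity of the homogeneous parameter vector.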
“…If ξα is regarded as an independent Gaussian random variable of mean ξ̄α and covariance matrix V[ξα], maximum likelihood (ML) estimation is to minimize the sum of the squared Mahalanobis distances of the data points ξα to the hyperplane to be fitted in R⁹, minimizing…”
Section: Maximum Likelihood Estimation
confidence: 99%
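The truncated "minimizing…" presumably leads into the standard reduced form of this Mahalanobis cost used throughout this literature; the expression below is reconstructed on that assumption, not quoted from the excerpt.

```latex
% Reduced Mahalanobis (AML) cost for hyperplane fitting in R^9;
% reconstructed as an assumption, not quoted from the excerpt.
J(\theta) \;=\; \sum_{\alpha=1}^{N}
  \frac{(\theta,\,\xi_\alpha)^2}{(\theta,\,V[\xi_\alpha]\,\theta)},
\qquad \|\theta\| = 1 .
```

Eliminating the unknown true values ξ̄α from the constrained sum of Mahalanobis distances yields this single unconstrained cost in θ, which is what the iterative schemes quoted above minimize.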