2010
DOI: 10.1016/j.patcog.2009.12.015
Recursive estimation of motion and a scene model with a two-camera system of divergent view

Cited by 17 publications (10 citation statements), published between 2011 and 2024.
References 43 publications.
“…Often, the individual cameras of the multi-camera system are arranged so that their fields of view (FOVs) have minimal (or zero) overlap, giving the vehicle a wider combined FOV for better perception of its surroundings. Previous studies have shown that a wider FOV improves camera motion estimation accuracy [4]. A wider FOV not only yields better accuracy but also offers an effective way to distinguish inliers from outliers, and our paper capitalises on the second point.…”
Section: arXiv:1605.03689v1 [cs.RO] 12 May 2016
confidence: 99%
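The claim in this statement, that a wide combined field of view makes it easier to separate inliers from outliers, can be illustrated with the generalized epipolar constraint on Plücker line coordinates, the same representation used by the solver excerpted further below. The C++/Eigen sketch that follows is only an illustration under assumed conventions: the function names gecResidual and isInlier, the threshold value, and the choice of expressing rays and camera offsets in the vehicle body frame with a candidate motion (R, t) are assumptions, not the cited paper's implementation.

#include <Eigen/Dense>
#include <cmath>

// Hypothetical residual of the generalized epipolar constraint for one
// cross-frame correspondence in a multi-camera rig. d1, d2 are ray
// directions and v1, v2 the offsets of the observing cameras, all expressed
// in the vehicle body frame of their respective time instants; (R, t) is a
// candidate vehicle motion between the two instants.
double gecResidual(const Eigen::Vector3d& d1, const Eigen::Vector3d& v1,
                   const Eigen::Vector3d& d2, const Eigen::Vector3d& v2,
                   const Eigen::Matrix3d& R, const Eigen::Vector3d& t) {
  // Pluecker coordinates of the two rays: direction and moment.
  const Eigen::Vector3d m1 = v1.cross(d1);
  const Eigen::Vector3d m2 = v2.cross(d2);
  // Generalized epipolar constraint: d1'[t]x R d2 + d1' R m2 + m1' R d2 = 0.
  const double r = d1.dot(t.cross(R * d2)) + d1.dot(R * m2) + m1.dot(R * d2);
  return std::abs(r);
}

// A correspondence is kept as an inlier if its residual under the current
// motion hypothesis stays below a (hypothetical) threshold.
bool isInlier(double residual, double threshold = 1e-3) {
  return residual < threshold;
}

With a wide combined FOV, an incorrect motion hypothesis tends to produce large residuals for correspondences seen by differently oriented cameras, which is what makes the inlier/outlier separation effective.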
“…#include <math.h>
#include <complex>

transformations_t fourpt( const RelativeAdapterBase & adapter, const Indices & indices )
{
  MatrixXd A(4,8);
  Matrix3d rotation = adapter.getR12();

  for( size_t i = 0; i < numberCorrespondences; i++ )
  {
    bearingVector_t d1 = adapter.getBearingVector1(indices[i]);
    bearingVector_t d2 = adapter.getBearingVector2(indices[i]);
    translation_t   v1 = adapter.getCamOffset1(indices[i]);
    translation_t   v2 = adapter.getCamOffset2(indices[i]);
    rotation_t     R11 = adapter.getCamRotation1(indices[i]);
    rotation_t     R21 = adapter.getCamRotation2(indices[i]);

    d1 = R11*d1;
    d2 = R21*d2;

    Eigen::Matrix<double,6,1> l1, l2_pr;
    l1.block<3,1>(0,0)    = d1;
    l1.block<3,1>(3,0)    = v1.cross(d1);
    l2_pr.block<3,1>(0,0) = rotation*d2;
    l2_pr.block<3,1>(3,0) = rotation*(v2.cross(d2));

    A(i,0) = l1(0)*l2_pr(3) + l1(1)*l2_pr(4) + l1(2)*l2_pr(5)
           + l1(3)*l2_pr(0) + l1(4)*l2_pr(1) + l1(5)*l2_pr(2);
    A(i,1) = l1(2)*l2_pr(1) - l1(1)*l2_pr(2);
    A(i,2) = l1(0)*l2_pr(2) - l1(2)*l2_pr(0);
    A(i,3) = l1(1)*l2_pr(0) - l1(0)*l2_pr(1);
    A(i,4) = l1(1)*l2_pr(3) - l1(3)*l2_pr(1) + l1(4)*l2_pr(0) - l1(0)*l2_pr(4);
    A(i,5) = l1(2)*l2_pr(0);
    A(i,6) = l1(2)*l2_pr(1);
    A(i,7) = -l1(0)*l2_pr(0) - l1(1)*l2_pr(1);
  }

  // Form a quartic equation of x^4+a3*x^3+a2*x^2+a1*x+a0=0
  double a0 =
    ( -A(0,0)*A(1,1)*A(2,2)*A(3,3)+A(0,0)*A(1,1)*A(2,3)*A(3,2)+A(0,0)*A(1,2)*A(2,1)*A(3,3)
      -A(0,0)*A(1,2)*A(2,3)*A(3,1)-A(0,0)*A(1,3)*A(2,1)*A(3,2)+A(0,0)*A(1,3)*A(2,2)*A(3,1)
      +A(0,1)*A(1,0)*A(2,2)*A(3,3)-A(0,1)*A(1,0)*A(2,3)*A(3,2)-A(0,1)*A(1,2)*A(2,0)*A(3,3)
      +A(0,1)*A(1,2)*A(2,3)*A(3,0)+A(0,1)*A(1,3)*A(2,0)*A(3,2)-A(0,1)*A(1,3)*A(2,2)*A(3,0)
      -A(0,2)*A(1,0)*A(2,1)*A(3,3)+A(0,2)*A(1,0)*A(2,3)*A(3,1)+A(0,2)*A(1,1)*A(2,0)*A(3,3)
      -A(0,2)*A(1,1)*A(2,3)*A(3,0)-A(0,2)*A(1,3)*A(2,0)*A(3,1)+A(0,2)*A(1,3)*A(2,1)*A(3,0)
      +A(0,3)*A(1,0)*A(2,1)*A(3,2)-A(0,3)*A(1,0)*A(2,2)*A(3,1)-A(0,3)*A(1,1)*A(2,0)*A(3,2)
      +A(0,3)*A(1,1)*A(2,2)*A(3,0)+A(0,3)*A(1,2)*A(2,0)*A(3,1)-A(0,3)*A(1,2)*A(2,1)*A(3,0) )
    /
    ( -A(0,4)*A(1,5)*A(2,6)*A(3,7)+A(0,4)*A(1,5)*A(2,7)*A(3,6)+A(0,4)*A(1,6)*A(2,5)*A(3,7)
      -A(0,4)*A(1,6)*A(2,7)*A(3,5)-A(0,4)*A(1,7)*A(2,5)*A(3,6)+A(0,4)*A(1,7)*A(2,6)*A(3,5)
      +A(0,5)*A(1,4)*A(2,6)*A(3,7)-A(0,5)*A(1,4)*A(2,7)*A(3,6)-A(0,5)*A(1,6)*A(2,4)*A(3,7)
      +A(0,5)*A(1,6)*A(2,7)*A(3,4)+A(0,5)*A(1,7)*A(2,4)*A(3,6)-A(0,5)*A(1,7)*A(2,6)*A(3,4)
      -A(0,6)*A(1,4)*A(2,5)*A(3,7)+A(0,6)*A(1,4)*A(2,7)*A(3,5)+A(0,6)*A(1,5)*A(2,4)*A(3,7)
      -A(0,6)*A(1,5)*A(2,7)*A(3,4)-A(0,6)*A(1,7)*A(2,4)*A(3,5)+A(0,6)*A(1,7)*A(2,5)*A(3,4)
      +A(0,7)*A(1,4)*A(2,5)*A(3,6)-A(0,7)*A(1,4)*A(2,6)*A(3,5)-A(0,7)*A(1,5)*A(2,4)*A(3,6)
      +A(0,7)*A(1,5)*A(2,6)*A(3,4)+A(0,7)*A(1,6)*A(2,4)*A(3,5)-A(0,7)*A(1,6)*A(2,5)*A(3,4) );

  double a1 =
    ( -A(0,0)*A(1,1)*A(2,2)*A(3,7)+A(0,0)*A(1,1)*A(2,3)*A(3,6)-A(0,0)*A(1,1)*A(2,6)*A(3,3)
      +A(0,0)*A(1,1)*A(2,7)*A(3,2)+A(0,0)*A(1,2)*A(2,1)*A(3,7)-A(0,0)*A(1,2)*A(2,3)*A(3,5)
      +A(0,0)*A(1,2)*A(2,5)*A(3,3)-A(0,0)*A(1,2)*A(2,7)*A(3,1)-A(0,0)*A(1,3)*A(2,1)*A(3,6)
      +A(0,0)*A(1,3)*A(2,2)*A(3,5)-A(0,0)*A(1,3)*A(2,5)*A(3,2)+A(0,0)*A(1,3)*A(2,6)*A(3,1)
      -A(0,0)*A(1,5)*A(2,2)*A(3,3)+A(0,0)*A(1,5)*A(2,3)*A(3,2)+A(0,0)*A(1,6)*A(2,1)*A(3,3)
      -A(0,0)*A(1,6)*A(2,3)*A(3,1)-A(0,0)*A(1,7)*A(2,1)*A(3,2)+A(0,0)*A(1,7)*A(2,2)*A(3,1)
      +A(0,1)*A(1,0)*A(2,2)*A(3,7)-A(0,1)*A(1,0)*A(2,3)*A(3,6)+A(0,1)*A(1,0)*A(2,6)*A(3,3)
      -A(0,1)*A(1,0)*A(2,7)*A(3,2)-A(0,1)*A(1,2)*A(2,0)*A(3,7)+A(0,1)*A(1,2)*A(2,3)*A(3,4)
      -A(0,1)*A(1,2)*A(2,4)*A(3,3)+A(0,1)*A(1,2)*A(2,7)*A(3,0)+A(0,1)*A(1,3)*A(2,0)*A(3,6)
      -A(0,1)*A(1,3)*A(2,2)*A(3,4)+A(0,1)*A(1,3)*A(2,4)*A(3,2)-A(0,1)*A(1,3)*A(2,6)*A(3,0)
      +A(0,1)*A(1,4)*A(2,2)*A(3,3)-A(0,1)*A(1,4)*A(2,3)*A(3,2)-A(0,1)*A(1,6)*A(2,0)*A(3,3)
      +A(0,1)*A(1,6)*A(2,3)*A(3,0)+A(0,1)*A(1,7)*A(2,0)*A(3,2)-A(0,1)*A(1,7)*A(2,2)*A(3,0)
      -A(0,2)*A(1,0)*A(2,1)*A(3,7)+A(0,2)*A(1,0)*A(2,3)*A(3,5)-A(0,2)*A(1,0)*A(2,5)*A(3,3)
      +A(0,2)*A(1,0)*A(2,7)*A(3,1)+A(0,2)*A(1,1)*A(2,0)*A(3,7)-A(0,2)*A(1,1)*A(2,3)*A(3,4)
      +A(0,2)*A(1,1)*A(2,4)*A(3,3)-A(0,2)*A(1,1)*A(2,7)*A(3,0)-A(0,2)...…”
confidence: 99%
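The excerpt above ends after the comment that reduces the problem to a quartic x^4 + a3*x^3 + a2*x^2 + a1*x + a0 = 0 in the remaining unknown, with each coefficient formed as a ratio of 4x4 determinant expansions over sub-blocks of A. The excerpt does not show how the quartic is solved; one common way to finish such a minimal solver is to take the eigenvalues of the quartic's companion matrix, as in the hypothetical C++/Eigen sketch below (the function name solveQuartic and the tolerance on the imaginary part are illustrative assumptions, not the cited implementation).

#include <Eigen/Dense>
#include <cmath>
#include <complex>
#include <vector>

// Solve x^4 + a3*x^3 + a2*x^2 + a1*x + a0 = 0 by taking the eigenvalues of
// the polynomial's companion matrix, keeping only the (nearly) real roots.
std::vector<double> solveQuartic(double a3, double a2, double a1, double a0) {
  Eigen::Matrix4d C = Eigen::Matrix4d::Zero();
  C(1, 0) = 1.0;            // sub-diagonal ones
  C(2, 1) = 1.0;
  C(3, 2) = 1.0;
  C(0, 3) = -a0;            // last column holds the negated coefficients
  C(1, 3) = -a1;
  C(2, 3) = -a2;
  C(3, 3) = -a3;

  Eigen::EigenSolver<Eigen::Matrix4d> es(C);
  std::vector<double> roots;
  for (int k = 0; k < 4; ++k) {
    const std::complex<double> r = es.eigenvalues()(k);
    if (std::abs(r.imag()) < 1e-8)
      roots.push_back(r.real());
  }
  return roots;
}

In a solver of this kind, each real root is typically substituted back to recover the remaining unknowns, and the physically consistent solution is selected afterwards.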
“…The team tactic patterns were used to train multilayer perceptrons [10]. Another study dealt with a neural network for motion perception and speed discrimination [11]. There were also studies that used different types of cameras, such as catadioptric [12] and multi-camera systems [13,14]. A technique based on m-mediods was used in [15] for motion classification and anomaly detection.…”
Section: Introduction
confidence: 99%
“…Also, the problem of multiple moving non-rigid objects, where several objects can be occluded at different depth levels, is not addressed [7]. Integrating depth information provides accurate estimation of motion in the z direction even for a static vision system, which is not achievable with monocular systems [8], [9].…”
Section: Introduction
confidence: 99%
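The point made in this last statement, that depth measurements give the motion component along the optical axis directly, can be made concrete with a small sketch. It is illustrative only: the flat frame layout, the function name zVelocity, and the time step dt are assumptions, not part of the cited work.

#include <cstddef>
#include <vector>

// Per-pixel z-motion from two registered depth frames: with depth available,
// the displacement along the optical axis is simply the depth change per unit
// time, something a single intensity camera cannot provide without extra
// scale information.
std::vector<float> zVelocity(const std::vector<float>& depthPrev,
                             const std::vector<float>& depthCurr,
                             float dt) {
  std::vector<float> vz(depthCurr.size(), 0.0f);
  for (std::size_t i = 0; i < vz.size(); ++i)
    vz[i] = (depthCurr[i] - depthPrev[i]) / dt;  // e.g. metres per second
  return vz;
}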