2018
DOI: 10.1186/s13640-018-0292-8
Active stereo platform: online epipolar geometry update

Abstract: This paper presents a novel method to update a variable epipolar geometry platform directly from the motor encoder based on mapping the motor encoder angle to the image space angle, avoiding the use of feature detection algorithms. First, an offline calibration is performed to establish a relationship between the image space and the hardware space. Second, a transformation matrix is generated using the results from this mapping. The transformation matrix uses the updated epipolar geometry of the platform to re…
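The abstract's two steps — an offline-calibrated mapping from encoder angle to image-space angle, then a transformation matrix built from that angle — can be sketched as below. The linear fit coefficients `a`, `b` and the rotation-induced homography H = K·R·K⁻¹ are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def encoder_to_image_angle(encoder_angle, a, b):
    """Map a motor encoder angle to an image-space angle using an
    offline-calibrated linear fit (coefficients a, b are hypothetical)."""
    return a * encoder_angle + b

def rotation_homography(K, theta):
    """Transformation matrix induced by rotating a camera by theta
    (radians) about its vertical axis: H = K @ R @ inv(K).
    K is the 3x3 intrinsic calibration matrix."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return K @ R @ np.linalg.inv(K)
```

With such a mapping, a verging camera's epipolar geometry can be updated from the encoder reading alone, with no feature detection in the loop.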

Cited by 12 publications (4 citation statements)
References 35 publications
“…Many applications have developed stereo vision systems to determine the position of objects for robot grasping [43]–[48]. Chen et al. [49] developed a picking robot system based on a fuzzy neural network sliding-mode algorithm.…”
Section: II (mentioning)
confidence: 99%
“…We can achieve this by setting the 2D sensor values (calibration matrix, distortion factor) and the "mutual spacing and orientation of the 2D sensors". Furthermore, epipolar geometry (which relates the two sensors) is needed to determine the 3D points that can be estimated from positions on the 2D sensors [20]. During the determination of the search points m′ = [x′, y′, 1] using the obtained picture, the following equation applies:…”
Section: 2D Sensors (mentioning)
confidence: 99%
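The truncated equation in the excerpt above is, in standard epipolar geometry, the constraint m′ᵀ·F·m = 0 between matched points in the two images, with F the fundamental matrix. A minimal sketch of that relation (the rectified-pair F used here is an illustrative example, not taken from the cited work):

```python
import numpy as np

def epipolar_line(F, m):
    """Epipolar line l' = F @ m in the second image for a homogeneous
    point m = [x, y, 1] in the first image; l' = [a, b, c] describes
    the line a*x' + b*y' + c = 0."""
    return F @ m

def epipolar_error(F, m, m_prime):
    """Epipolar constraint residual m'^T @ F @ m; zero for a
    geometrically consistent correspondence."""
    return float(m_prime @ F @ m)
```

For a rectified stereo pair, F reduces to a skew form and the residual is simply the row difference y − y′, which is why matching can be restricted to the same scanline.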
“…The SVS obtains 3-D information from two images captured by two cameras separated by a known distance. Similar SVS designs can be found in the literature [27]–[29]. The developed computer program for 3-D point localization using the SVS can be divided into five steps: image capture, camera calibration, pattern matching, converting pixel coordinates to angles, and triangulation. Figure 4 shows the localization of a 3-D point in the scene using the developed SVS.…”
Section: SVS Implementation (mentioning)
confidence: 99%
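The last two steps of the pipeline quoted above — pixel-to-angle conversion and triangulation — can be sketched for a pair of parallel pinhole cameras. The geometry (cameras separated by `baseline` along x, depth Z = B / (tan θ_L − tan θ_R)) is a standard assumption, not the cited paper's exact program.

```python
import numpy as np

def pixel_to_angle(u, cx, f):
    """Convert a horizontal pixel coordinate u to a viewing angle
    (radians) for a pinhole camera with principal point cx and
    focal length f (in pixels)."""
    return np.arctan2(u - cx, f)

def triangulate_depth(theta_l, theta_r, baseline):
    """Depth of a point seen at horizontal angles theta_l and theta_r
    by two parallel cameras a known baseline apart:
    Z = B / (tan(theta_l) - tan(theta_r))."""
    return baseline / (np.tan(theta_l) - np.tan(theta_r))
```

As the two angles approach equality (zero disparity) the denominator vanishes and depth diverges, which is the usual far-field limit of a stereo rig.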