1993
DOI: 10.1109/34.221167
3-D translational motion and structure from binocular image flows

Cited by 30 publications (16 citation statements). References 8 publications.
“…If the motion parameters are to be obtained using just two image patterns, an additional constraint must be introduced. Li and Duncan [20] proposed that a suitable additional constraint could be obtained simply by multiplying one of the two optical equations by the y-coordinate value in order to generate the following 3x3 image matrix:…”
Section: Limitations of Binocular Stereo Image Visioning Systems
confidence: 99%
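The constraint discussed above lets translational motion be recovered linearly once depth is available. As an illustrative sketch (not the exact Li–Duncan formulation), under pure translation the perspective flow at image point (x, y) with depth Z is u = (x·tz − f·tx)/Z and v = (y·tz − f·ty)/Z, so stacking these equations over several points yields a linear system solvable by least squares:

```python
import numpy as np

def estimate_translation(points, flows, depths, f=1.0):
    """Least-squares estimate of camera translation (tx, ty, tz)
    from a pure-translation flow field with known per-point depth.

    For each point (x, y) with depth Z the translational flow model is
        u = (x*tz - f*tx) / Z,   v = (y*tz - f*ty) / Z,
    which is linear in t = (tx, ty, tz).
    """
    rows, rhs = [], []
    for (x, y), (u, v), Z in zip(points, flows, depths):
        rows.append([-f / Z, 0.0, x / Z]); rhs.append(u)   # u-equation
        rows.append([0.0, -f / Z, y / Z]); rhs.append(v)   # v-equation
    t, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return t
```

Three or more non-degenerate points over-determine the three unknowns, which is why a binocular system with only two views needs the extra geometric constraint the quoted passage describes.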
“…However, the authors neither addressed the behavior of translational motion nor explored the potential sources of navigational error. Li and Duncan [20] used the image flow fields captured by two parallel stereo cameras to determine the 3D translational motion parameters with respect to various objects in the viewing area and to establish the correspondence between equivalent features in the left and right images. However, since the disparity between the left and right images in a binocular system is insufficient to fully determine the translational motion, the authors were obliged to introduce an additional geometrical constraint, namely multiplying one of the two optical equations by the y-coordinate value.…”
Section: Introduction
confidence: 99%
“…In earlier work on motion-stereo integration, whether the results of independent motion and stereo processing were merely juxtaposed (Ayache and Faugeras, 1989; Grosso et al, 1989; Kriegman et al, 1989), with the final structure estimates based on some combination of the outputs of these separate, loosely coupled processes (Clark and Yuille, 1994), or a tightly coupled approach was taken in which the processing of one type of visual information may depend on the presence of another (Balasubramanyam and Snyder, 1991; Li and Duncan, 1993; Shi et al, 1994; Waxman and Duncan, 1993; Zhang and Negahdaripour, 2008), the all but universal assumption is that the overlap in the visual field is used for computing binocular disparity. This assumption remains true in later approaches employing more sophisticated techniques such as PDEs (Strecha and Gool, 2002), variational methods (Huguet and Devernay, 2007; Pons et al, 2007; Williams et al, 2005) and factorization (Ho and Chung, 2000).…”
Section: Literature Review
confidence: 99%
“…In general, it is possible to make a distinction between the approaches where the results of stereo and motion analysis are considered separately and the rather different approach based upon more integrated relations. Within this group falls the work reported in [18], [19] where the temporal derivative of disparity is exploited, and the dynamic stereo approach [20], [21] considered in this paper.…”
Section: Introduction
confidence: 99%
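The "temporal derivative of disparity" mentioned in the last statement can be sketched with the standard rectified-stereo relation (illustrative, not the cited papers' exact formulation): with focal length f and baseline B, depth is Z = fB/d, so differentiating in time gives dZ/dt = −fB·(dd/dt)/d², i.e. the disparity rate directly encodes motion in depth:

```python
def depth_rate(f, B, d, d_dot):
    """Depth Z and its time derivative from stereo disparity d and its
    temporal derivative d_dot, for a rectified pair with baseline B and
    focal length f:
        Z = f*B / d
        dZ/dt = -f*B * d_dot / d**2   (chain rule on Z = f*B/d)
    """
    Z = f * B / d
    Z_dot = -f * B * d_dot / d ** 2
    return Z, Z_dot
```

For example, a shrinking disparity (d_dot < 0) yields a positive dZ/dt, i.e. the point is receding, which is what dynamic-stereo approaches exploit instead of treating each stereo frame independently.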