2016 International Conference on Image and Vision Computing New Zealand (IVCNZ)
DOI: 10.1109/ivcnz.2016.7804435
Estimating heading direction from monocular video sequences using biologically-based sensors

Abstract: The determination of one's movement through the environment (visual odometry or self-motion estimation) from monocular sources such as video is an important research problem because of its relevance to robotics and autonomous vehicles. The traditional computer vision approach to this problem tracks visual features across frames in order to obtain 2-D image motion estimates from which the camera motion can be derived. We present an alternative scheme which uses the properties of motion sensitive cells in the pr…

Cited by 3 publications (3 citation statements)
References 34 publications
“…Our heading estimation units are very tolerant of noise in the flow field vector directions. As long as there are a sufficient number of vectors distributed across the field and the edge orientations causing the aperture problem are randomly distributed around the radial direction out from the putative FOE locations, the heading can still be estimated accurately [4,27]. Once the FOE has been determined, the true direction of the image motion is constrained to lie along the radial direction (α) of a line joining the derived FOE location to the vector location.…”
Section: Heading Estimation and Depth Extraction (mentioning)
Confidence: 99%
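The radial-direction constraint described in the statement above can be sketched as follows. The function names, array shapes, and helper structure are illustrative assumptions, not code from the cited paper:

```python
import numpy as np

def radial_directions(foe, locations):
    """Angle (alpha) of the line joining the derived FOE to each vector
    location. `foe` is an (x, y) pair; `locations` is an (N, 2) array of
    flow-vector positions. Names are illustrative only."""
    d = np.asarray(locations, dtype=float) - np.asarray(foe, dtype=float)
    return np.arctan2(d[:, 1], d[:, 0])

def constrain_to_radial(speeds, alphas):
    """Recover full 2-D flow vectors by projecting scalar speed estimates
    onto the radial (alpha) directions -- the constraint that resolves the
    aperture problem once the FOE of a pure-translation field is known."""
    return np.stack([speeds * np.cos(alphas),
                     speeds * np.sin(alphas)], axis=1)
```

For example, with the FOE at the origin, a vector located at (1, 0) is constrained to point along alpha = 0, and one at (0, 2) along alpha = pi/2.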
“…In all tests reported in this paper, the curvilinear rotation detected was 0° s⁻¹ and so no rotation compensation was applied. The rotation-free flow field is then used as input (step 4) to our heading detector stage [18], [20]. The heading detectors are designed to find the 'Focus of Expansion' (FOE), which is the point in the image out from which all the vectors radiate.…”
Section: Model Description (mentioning)
Confidence: 99%
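The geometric idea behind such an FOE detector can be illustrated with a minimal least-squares sketch: in a pure-translation field, every flow vector lies on a line through the FOE, so the FOE minimises the summed squared perpendicular distance to those lines. This is an illustrative formulation, not the biologically-based detector units of the cited paper:

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares Focus of Expansion for a pure-translation flow field.

    `points` is an (N, 2) array of vector locations and `flows` the
    corresponding (N, 2) flow vectors. Each flow vector at p_i lies on a
    line through the FOE, so we minimise sum_i (n_i . (x - p_i))^2 where
    n_i is the unit normal to flow direction i. Illustrative sketch only.
    """
    points = np.asarray(points, dtype=float)
    flows = np.asarray(flows, dtype=float)
    # Unit normal to each flow direction.
    n = np.stack([-flows[:, 1], flows[:, 0]], axis=1)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # Normal equations of the least-squares problem.
    A = np.einsum('ni,nj->ij', n, n)
    b = np.einsum('ni,nj,nj->i', n, n, points)
    return np.linalg.solve(A, b)
```

With noise-free radial flow (each flow vector equal to its location minus the FOE), the estimate recovers the FOE exactly; with noisy directions it degrades gracefully, consistent with the tolerance the excerpts describe.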
“…We have now overcome both of these problems and have developed a system (based on the known properties of cells in the primate visual system) for measuring image motion [10] and for obtaining a pure translation flow field from a combined T+R field [16], [17]. The image velocity estimation stage has been described in detail previously [10] and an overview can be found in [18]. One of us recently also described a technique for measuring and removing the R component of the motion during movement along curvilinear paths [16].…”
Section: Introduction (mentioning)
Confidence: 99%