1995
DOI: 10.6028/nist.ir.5605
Real-time obstacle avoidance using central flow divergence and peripheral flow

Abstract: The lure of using motion vision as a fundamental element in the perception of space drives this effort to use flow features as the sole cues for robot mobility. Real-time estimates of image flow and flow divergence provide the robot's sense of space. The robot steers down a conceptual corridor, comparing left and right peripheral flows. Large central flow divergence warns the robot of impending collisions at "dead ends." When this occurs, the robot turns around and resumes wandering. Behavior is generated by …
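The wandering behavior described in the abstract can be summarized in a short sketch: steer so as to balance the left and right peripheral flow magnitudes, and treat a large central flow divergence as a collision warning that triggers a turn-around. This is a minimal illustration under assumptions, not the report's implementation; the region boundaries, the steering gain `k_steer`, and the threshold `div_threshold` are placeholders, and a real-time flow estimator is presupposed.

```python
import numpy as np

def peripheral_balance_steering(flow, k_steer=1.0, div_threshold=0.5):
    """Sketch of flow-based wandering: steer to balance left/right peripheral
    flow magnitudes; flag a turn-around when central flow divergence is large.

    flow : (H, W, 2) array of image flow vectors (u, v) in pixels/frame.
    Returns (steer, turn_around).
    """
    h, w, _ = flow.shape
    third = w // 3

    # Peripheral flow: average flow speed in the left and right image thirds.
    speed = np.linalg.norm(flow, axis=2)
    left_flow = speed[:, :third].mean()
    right_flow = speed[:, -third:].mean()

    # Steering command: turn away from the side with faster flow (nearer surfaces).
    # Positive value means "steer right" in this sketch's convention.
    steer = k_steer * (left_flow - right_flow)

    # Central flow divergence: du/dx + dv/dy averaged over the central third.
    u, v = flow[..., 0], flow[..., 1]
    divergence = (np.gradient(u, axis=1) + np.gradient(v, axis=0))[:, third:-third].mean()

    # Large central divergence warns of an impending collision ("dead end").
    turn_around = divergence > div_threshold

    return steer, turn_around
```

In this sketch the robot would apply `steer` continuously and, whenever `turn_around` is true, stop, rotate away from the obstruction, and resume wandering, as the abstract describes.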

Cited by 88 publications (48 citation statements). References 21 publications.
“…One can note that rotation compensation was already used with [a] fish-eye camera in order to have a more direct link between flow and depth. Another work (Coombs et al, 1998) also demonstrated that basic obstacle avoidance could be achieved in cluttered environments such as a closed room.…”
Section: Monocular Vision Based Sense and Avoid (mentioning)
confidence: 99%
“…This method is invariant under many assumptions inherently associated with appearance-based methods. Although the qualitative use of optical-flow has had good success for navigation through techniques such as balancing optical-flow fields [15], we focus on those techniques which are beneficial toward the task of image classification. The idea of ground-plane detection using optical-flow is not new [16] [17].…”
Section: Related Work (mentioning)
confidence: 99%
“…Traditional methods of computing TTC [1,9] require computing the divergence of the estimated optical flow, which is not only computationally intensive but, more importantly, requires a significant amount of texture in the scene. To overcome these problems, Horn et al [14] have recently described a direct method to determine the time-to-collision using image brightness derivatives (temporal and spatial) without any calibration, tracking, or optical flow estimation.…”
Section: Time-to-collision (mentioning)
confidence: 99%
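As the excerpt above notes, the traditional route to time-to-collision (TTC) computes the divergence of the estimated optical flow: for pure translation toward a frontal surface the flow expands radially about the focus of expansion and its divergence equals 2/TTC. The sketch below illustrates only that relation; the dense-flow array layout, the whole-image averaging, and the `eps` guard are assumptions, and Horn et al.'s direct brightness-derivative method [14] is not reproduced here.

```python
import numpy as np

def time_to_collision_from_flow(flow, eps=1e-6):
    """Estimate time-to-collision (in frames) from a dense flow field.

    For pure translation toward a frontal surface, the flow is approximately
    (u, v) = (x, y) / TTC about the focus of expansion, so
    div(flow) = du/dx + dv/dy = 2 / TTC.

    flow : (H, W, 2) array of flow vectors (u, v) in pixels/frame.
    """
    u, v = flow[..., 0], flow[..., 1]
    divergence = np.gradient(u, axis=1) + np.gradient(v, axis=0)

    # Average over the image to suppress flow noise, then invert div = 2 / TTC.
    # The eps guard avoids division by zero for near-zero (or negative) divergence.
    mean_div = float(divergence.mean())
    return 2.0 / max(mean_div, eps)


# Example: a synthetic expanding flow field constructed with TTC = 40 frames.
h, w, ttc_true = 64, 64, 40.0
y, x = np.mgrid[-h // 2:h // 2, -w // 2:w // 2].astype(float)
flow = np.stack([x / ttc_true, y / ttc_true], axis=-1)
print(time_to_collision_from_flow(flow))  # ~40.0
```

Running the example on the synthetic expanding field recovers a TTC of about 40 frames, matching the value used to construct the flow.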