2006 IEEE Intelligent Vehicles Symposium
DOI: 10.1109/ivs.2006.1689603
Increased Accuracy Stereo Approach for 3D Lane Detection

Cited by 27 publications (15 citation statements) | References 9 publications
“…For instance, several works address sub-pixel-accuracy lane-marking models [3][4][5] using calibration information. Others use image alignment between the stereo pair to detect volumetric objects on the road plane [6,7], or to enhance lane marking detection [8].…”
Section: Introduction
confidence: 99%
“…1: first an Inverse Perspective Mapping (IPM) transformation [6], [7] is applied to the frame grabbed by the camera, then a low-level filtering is performed to highlight the Dark-Light-Dark (DLD) [8] patterns of the image; the resulting points are grouped together, and clusters are finally approximated by continuous piecewise-linear functions. Once the low-level processing is over, the resulting segment lists are compared to existing lane markings in a tracking stage (Fig.…”
Section: Algorithm
confidence: 99%
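The low-level DLD filtering step quoted above can be sketched as a simple 1-D operator applied along an image row: a pixel responds when it is brighter than both a left and a right neighbour at a fixed distance, the signature of a painted marking on darker asphalt. The function name, parameter values, and exact response formula below are illustrative assumptions, not the cited paper's implementation:

```python
def dld_filter(row, half_width=3, threshold=20):
    """Flag Dark-Light-Dark (DLD) patterns along one image row.

    A pixel is flagged when its intensity exceeds both the pixel
    `half_width` to its left and the pixel `half_width` to its right
    by more than `threshold`. Both parameters are assumed values for
    illustration; a real detector would tune them to the expected
    marking width in the IPM-remapped image.
    """
    n = len(row)
    mask = [False] * n
    for x in range(half_width, n - half_width):
        left_contrast = row[x] - row[x - half_width]
        right_contrast = row[x] - row[x + half_width]
        # Require a dark region on BOTH sides (the D-L-D signature),
        # so single dark-to-light edges do not respond.
        mask[x] = min(left_contrast, right_contrast) > threshold
    return mask

# A synthetic row: dark road (intensity 30) with a 4-pixel-wide
# bright marking (intensity 200) at columns 8..11.
row = [30] * 20
row[8:12] = [200] * 4
mask = dld_filter(row)
```

With `half_width=3`, only the interior of the 4-pixel marking responds (columns 9 and 10), since at the marking's edges one of the two comparison pixels still lies inside the bright region; the points flagged this way would then feed the clustering and piecewise-linear fitting stages described in the citing paper.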
“…The integration of a vision system with active sensors is useful because the two technologies are complementary, and in some ways redundant: while the problem of lane detection is mainly addressed using vision [2] [3] [4], obstacle detection can get many benefits from an intelligent fusion with scan data. This topic will be discussed in section III.…”
Section: Introduction
confidence: 99%