2019
DOI: 10.1016/j.heliyon.2019.e02169

Vision-based lane departure warning framework

Abstract: Collisions arising from lane departures have contributed to traffic accidents causing millions of injuries and tens of thousands of casualties per year worldwide. Many related studies have shown that single-vehicle lane departure crashes, resulting from drifting out of the roadway, account for a large share of road traffic deaths. Hence, automotive safety has become a concern for road users, as most road casualties occur due to drivers' fallacious judgement of the vehicle path. This paper proposes a vision-…

Cited by 15 publications (6 citation statements) | References 29 publications
“…However, road lane detection fails at night because street lights are required for the lane markings to be visible in the dark. A vision-based approach was proposed by [18] for lane detection in both daytime and nighttime. For lane detection, the traffic flow and road surface conditions on urban roads and highways are assessed.…”
Section: Related Work
confidence: 99%
“…LaLi+VPR achieves an average accuracy of 93.88%, but it is still 5.095% lower than TSA‐LNet. Vision‐based lane detection (VBLD) (Em et al., 2019) is another traditional method based on the Hough transform. VBLD involves four stages: color space transformation, region-of-interest extraction, lane segmentation, and lane detection using the Hough transform.…”
Section: Results
confidence: 99%
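
To make the four-stage VBLD pipeline described above concrete, the following is a minimal sketch using OpenCV and NumPy. The region-of-interest polygon, the Canny thresholds, and the Hough parameters are illustrative assumptions, not values reported in the cited papers.

```python
import cv2
import numpy as np

def detect_lanes(frame):
    """Minimal sketch of a four-stage Hough-based lane detection pipeline."""
    # Stage 1: color space transformation (BGR -> grayscale).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Stage 2: region-of-interest extraction (keep the lower trapezoid
    # of the image, where lane markings typically appear).
    h, w = gray.shape
    mask = np.zeros_like(gray)
    roi = np.array([[(0, h), (w, h),
                     (int(0.55 * w), int(0.6 * h)),
                     (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(gray, mask)

    # Stage 3: lane segmentation (blur + Canny edges as a simple proxy
    # for isolating lane-marking pixels).
    edges = cv2.Canny(cv2.GaussianBlur(masked, (5, 5), 0), 50, 150)

    # Stage 4: lane detection via the probabilistic Hough transform.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=100)
    return lines if lines is not None else []
```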
“…The recommendation for future work is to reduce the dependency on vehicle state information, such as the yaw acceleration response, for the application of LDWS. Vehicle state information alone cannot overcome the limitations imposed by environmental factors such as rainy conditions, drastic illumination changes (e.g., at night), unstructured roads, and clutter or shadow noise on the road surface [27]. Alternatively, vision data can be fused with a local sensor such as a yaw rate sensor to enhance LDWS reliability, particularly in urban traffic situations with a variety of road signs printed on the road surface.…”
Section: Results
confidence: 99%
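
As a rough illustration of how vision output might be fused with a yaw rate sensor, the sketch below blends the camera-estimated lateral offset with an offset predicted from yaw rate and vehicle speed using a fixed-weight complementary filter. The function name, the 0.8 weighting, and the straight-road simplification (yaw rate integrates directly to heading error) are all hypothetical assumptions; neither cited paper specifies this scheme.

```python
import math

def fuse_lateral_offset(vision_offset_m, yaw_rate_rad_s, speed_m_s,
                        state, dt_s, vision_weight=0.8):
    """Hypothetical complementary filter for LDWS sensor fusion.
    state = (prev_offset_m, heading_error_rad); returns the updated pair."""
    prev_offset_m, heading_error_rad = state

    # Integrate yaw rate into a heading error relative to the lane direction
    # (assumes a straight road segment, an illustrative simplification).
    heading_error_rad += yaw_rate_rad_s * dt_s

    # Predict the new lateral offset from vehicle motion alone.
    predicted_offset = prev_offset_m + speed_m_s * math.sin(heading_error_rad) * dt_s

    # Blend the camera measurement with the prediction; the prediction helps
    # bridge frames where the vision estimate is degraded (rain, night, shadows).
    fused = vision_weight * vision_offset_m + (1.0 - vision_weight) * predicted_offset
    return fused, heading_error_rad
```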