“…Finally, only 604 of the 1546 frames in our dataset include two lane-markings. Among all six algorithms, the best performance on our dataset is achieved by [15], which still fails to reach a 90% correct warning rate. The main reason is that these algorithms require two lane-markings, while the angle between the two detected lane-marking lines can change drastically with the deviation angle of the camera.…”
Section: Results
confidence: 95%
“…A lane departure identification method used three lane-related parameters, namely the Euclidean distances between each pair of the Hough origin H_o and the midpoints mp_1 and mp_2 of the identified left and right lane-markings, to identify the state of departure [11][12][13]. In addition, algorithms that judge the (ρ, θ) patterns, or just one of the detected left and right lane-markings, determine the left or right lane departure situation [14][15][16][17][18][19][20][21][22][23][24]. A recent study by Lin et al. also determines lane departure from the information of the detected lane-markings only; it uses a state machine to recognize the "left," "right," and "normal" states, which reduces false alarms when a lane-marking is blocked by obstacles [3].…”
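The snippet above describes the general recipe (judge the (ρ, θ) patterns of the detected lane-markings, then filter the decision through a small state machine), but not the cited works' exact thresholds or transition rules. The following minimal Python sketch illustrates that recipe under assumed conventions; the class name LaneDepartureFSM, the angle threshold, and the confirmation count are illustrative assumptions rather than values from [3] or [14]-[24].

# Sketch of a (rho, theta)-pattern lane-departure check combined with a
# "left"/"right"/"normal" state machine. All thresholds and conventions
# below are assumptions for illustration, not the cited papers' values.
NORMAL, LEFT, RIGHT = "normal", "left", "right"

class LaneDepartureFSM:
    def __init__(self, theta_threshold_deg=15.0, confirm_frames=3):
        self.theta_threshold = theta_threshold_deg  # marking this close to vertical => departure
        self.confirm_frames = confirm_frames        # consecutive frames needed before switching state
        self.state = NORMAL
        self._candidate, self._count = NORMAL, 0

    def update(self, left_theta_deg=None, right_theta_deg=None):
        """Feed per-frame Hough angles of the detected markings (None if a marking is blocked)."""
        observed = self._classify(left_theta_deg, right_theta_deg)
        if observed is None:
            return self.state               # both markings blocked: hold the current state
        if observed == self._candidate:
            self._count += 1
        else:
            self._candidate, self._count = observed, 1
        if self._count >= self.confirm_frames:
            self.state = self._candidate
        return self.state

    def _classify(self, left_theta, right_theta):
        # Assumed convention: theta near 0 deg means the marking appears nearly
        # vertical in the image, i.e. the vehicle is driving onto that marking.
        if left_theta is None and right_theta is None:
            return None
        if left_theta is not None and abs(left_theta) < self.theta_threshold:
            return LEFT
        if right_theta is not None and abs(right_theta) < self.theta_threshold:
            return RIGHT
        return NORMAL

Requiring several consecutive frames before the state changes is what suppresses false alarms when a marking is briefly occluded, which matches the motivation given for the state machine in [3].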
Vision-based lane departure warning systems (LDWS) are widely used in modern vehicles to improve drivability and safety. In this paper, a novel LDWS with precise positioning is proposed. A calibration strategy is first presented through a 3D camera imaging model that requires only three parallel, equally spaced lines, from which the three rotation angles of the transformation from the camera coordinate system to the world coordinate system are deduced. The camera height is then calculated, in contrast to previous works that rely on a measured value with potential errors. A criterion for lane departure warning using only one of the two lane-markings is proposed, which estimates both the yaw angle and the distance between the lane-marking and the vehicle. Experiments show that the calibration strategy can be easily set up and achieves an average accuracy of 98.95% on lane departure assessment.
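The abstract's single-marking criterion is not spelled out here, so the following is only a generic sketch of how a yaw angle and a lane-marking distance can be recovered from one detected marking once the camera is calibrated. It assumes an image-to-ground homography H is already available (taken as given here, whereas the paper derives the rotation angles and camera height explicitly); the function names and coordinate conventions are assumptions.

# Generic sketch (not the paper's exact formulation): with a calibrated
# image-to-ground homography H, two image points on ONE detected lane-marking
# are enough to estimate the yaw angle and the lateral distance to the marking.
import numpy as np

def ground_point(H, u, v):
    """Map an image pixel (u, v) to ground-plane coordinates (X, Y) in metres."""
    X, Y, W = H @ np.array([u, v, 1.0])
    return X / W, Y / W

def yaw_and_distance(H, p1, p2):
    """p1, p2: two image points on the detected lane-marking.
    Assumed world frame: origin below the camera, Y axis along the vehicle
    heading, X axis to the right. Returns (yaw in degrees, signed distance in metres)."""
    x1, y1 = ground_point(H, *p1)
    x2, y2 = ground_point(H, *p2)
    dx, dy = x2 - x1, y2 - y1
    yaw = np.degrees(np.arctan2(dx, dy))                # 0 deg: marking parallel to the heading
    dist = abs(x1 * dy - y1 * dx) / np.hypot(dx, dy)    # perpendicular distance from origin to the marking line
    side = -1.0 if x1 < 0 else 1.0                      # negative: marking lies to the left (assumed sign convention)
    return yaw, side * dist

A departure rule can then, for example, compare the signed distance and its trend over frames against half the vehicle width; the paper's own criterion is not reproduced here.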
“…Consider the image shown in Figure 2, with a sample set of detected edge-point coordinates (x_i, y_j) = (6,5), (7,6), (7,16), (8,15), (9,14), (10,13), (13,11), (14,10), (15,9), (16,8) and (17,7).…”
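The specifics of the dynamic origin technique are not reproduced in the snippet, so as a baseline illustration the following Python sketch runs the standard fixed-origin Hough voting, rho = x·cosθ + y·sinθ, over exactly the sample points listed above; the angular resolution and accumulator size are arbitrary choices.

# Standard (fixed-origin) Hough voting over the sample edge points from the
# snippet, to illustrate how such coordinates vote for line parameters.
# The dynamic-origin variant itself is not reproduced here.
import numpy as np

points = [(6, 5), (7, 6), (7, 16), (8, 15), (9, 14),
          (10, 13), (13, 11), (14, 10), (15, 9), (16, 8), (17, 7)]

thetas = np.deg2rad(np.arange(0, 180, 1))          # 1-degree steps
rho_max = int(np.ceil(np.hypot(max(x for x, _ in points),
                               max(y for _, y in points))))
accumulator = np.zeros((2 * rho_max + 1, len(thetas)), dtype=int)

for x, y in points:
    rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
    accumulator[rhos + rho_max, np.arange(len(thetas))] += 1

# The strongest cell corresponds to the line most of the points lie on.
r_idx, t_idx = np.unravel_index(accumulator.argmax(), accumulator.shape)
print("rho =", r_idx - rho_max,
      "theta(deg) =", np.degrees(thetas[t_idx]),
      "votes =", accumulator[r_idx, t_idx])

For these points the strongest cell lands near θ ≈ 45° with about five votes, reflecting the near-diagonal line on which most of the listed coordinates lie.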
Section: H. Example For Dynamic Origin Technique (DOT)
confidence: 99%
“…In another recommendation, the bottom 40% of the image is considered as the ROI [1]. Another proposal introduced an intelligent trapezoidal ROI [8]. The ROI selection and line detection procedure is applied independently on the first frame and then carried forward to the succeeding video frames [11].…”
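Both ROI strategies mentioned in the snippet are easy to express in code. The sketch below, using OpenCV and NumPy, keeps the bottom 40% of the frame as in [1] and masks a trapezoidal region in the spirit of [8]; the exact trapezoid corner fractions are assumptions chosen only for illustration.

# Sketch of the two ROI strategies mentioned above: keep the bottom 40% of the
# frame, or mask a road-shaped trapezoid. Corner fractions are illustrative only.
import cv2
import numpy as np

def bottom_roi(frame, fraction=0.40):
    """Crop the bottom `fraction` of the image as the ROI."""
    h = frame.shape[0]
    return frame[int(h * (1.0 - fraction)):, :]

def trapezoidal_roi(frame):
    """Zero out everything outside an assumed road-shaped trapezoid."""
    h, w = frame.shape[:2]
    corners = np.array([[(int(0.1 * w), h), (int(0.45 * w), int(0.6 * h)),
                         (int(0.55 * w), int(0.6 * h)), (int(0.9 * w), h)]],
                       dtype=np.int32)
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, corners, 255)
    return cv2.bitwise_and(frame, frame, mask=mask)

As in [11], the ROI computed on the first frame can simply be reused for the succeeding video frames.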