Vision-based lane detection is an important component of advanced driver assistance systems and is essential for lane departure warning, lane keeping, and vehicle localisation. However, improving the robustness of multi-lane detection remains challenging owing to the perspective effect, low visibility of lane markings, and partial occlusions. To address these issues, the authors propose an improved lane hypothesis generation method based on reliable binary blob analysis. Most existing top-view-based methods focus on lane model fitting but neglect the reliability of hypothesis generation and their effectiveness in challenging conditions. To cope with these shortcomings, the authors perform vanishing point detection and inverse perspective mapping to remove the perspective effect from road images. Two-stage binary blob filtering and classification-based blob verification are then introduced to improve the robustness of lane hypothesis generation. Experimental results show an average detection accuracy of 97.7% on a new challenging multi-lane dataset, and the proposed method outperforms the state-of-the-art method by 1.6% in detection accuracy on the Caltech lane benchmark dataset.
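The abstract describes a pipeline of vanishing-point-guided inverse perspective mapping followed by blob-level filtering of lane-marking candidates. The sketch below is a minimal OpenCV illustration of those two stages, not the authors' implementation: the source quadrilateral, bird's-eye-view size, and the area and aspect-ratio thresholds are placeholder assumptions, and the paper's second-stage classifier-based verification is omitted.

```python
import cv2
import numpy as np

def inverse_perspective_mapping(frame, src_pts, bev_size=(400, 600)):
    """Warp a road image to a top-down (bird's-eye) view.

    src_pts: four image points delimiting the road region, ordered
    top-left, top-right, bottom-right, bottom-left. In the paper these
    would be derived from the detected vanishing point; here the caller
    supplies them directly (illustrative assumption).
    """
    w, h = bev_size
    dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, H, (w, h))

def filter_lane_blobs(binary_bev, min_area=80, min_aspect=3.0):
    """First-stage blob filtering: keep connected components whose size and
    elongation are plausible for lane markings (illustrative rules only)."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary_bev, connectivity=8)
    candidates = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue
        aspect = max(w, h) / max(1, min(w, h))
        if aspect >= min_aspect:  # lane markings appear long and thin in the BEV
            candidates.append((x, y, w, h))
    return candidates
```

In a full system, the surviving blobs would be passed to the verification classifier and then to lane model fitting; this sketch stops at candidate generation.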
In this paper, we present an effective infrared (IR) and visible (VIS) image fusion method based on a deep neural network. A Siamese convolutional neural network (CNN) is applied to automatically generate a weight map that represents the saliency of each pixel for a pair of source images; the CNN encodes each image into a feature domain for classification. With the proposed method, the two key problems in image fusion, activity level measurement and fusion rule design, are addressed jointly in a single step. The fusion is carried out through multi-scale image decomposition based on the wavelet transform, and the reconstructed result is more consistent with human visual perception. In addition, the qualitative effectiveness of the proposed fusion method is evaluated by comparing pedestrian detection results obtained with the YOLOv3 object detector on a public benchmark dataset against those of other fusion methods. Experimental results show that the proposed method achieves competitive performance in terms of both quantitative assessment and visual quality.
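The core idea, a per-pixel weight map steering a wavelet-domain fusion, can be sketched as follows. This is only an illustrative stand-in: the Siamese CNN of the paper is replaced here by a simple local-contrast saliency map, and the wavelet, decomposition level, and detail-band rule (max-absolute selection) are assumptions rather than the authors' design.

```python
import numpy as np
import pywt
import cv2

def saliency_weight_map(ir, vis, ksize=7):
    """Placeholder for the Siamese-CNN weight map: favour the source image
    with the larger local contrast at each pixel (illustrative stand-in)."""
    def local_energy(img):
        img = img.astype(np.float32)
        mean = cv2.blur(img, (ksize, ksize))
        return cv2.blur((img - mean) ** 2, (ksize, ksize))
    w_ir = local_energy(ir)
    w_vis = local_energy(vis)
    return w_ir / (w_ir + w_vis + 1e-6)  # weight of the IR image, in [0, 1]

def wavelet_fuse(ir, vis, wavelet="db2", level=3):
    """Fuse IR and VIS images in the wavelet domain: weighted averaging of
    the approximation band, max-absolute selection for the detail bands."""
    w = saliency_weight_map(ir, vis)
    c_ir = pywt.wavedec2(ir.astype(np.float32), wavelet, level=level)
    c_vis = pywt.wavedec2(vis.astype(np.float32), wavelet, level=level)

    # Approximation band: blend with the (resized) weight map.
    w_lo = cv2.resize(w, (c_ir[0].shape[1], c_ir[0].shape[0]))
    fused = [w_lo * c_ir[0] + (1.0 - w_lo) * c_vis[0]]

    # Detail bands: keep the coefficient with the larger magnitude.
    for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d_ir, d_vis)))
    return pywt.waverec2(fused, wavelet)
```

Swapping the placeholder saliency map for a learned one is exactly where the paper's Siamese CNN would plug in; the surrounding decomposition and reconstruction steps stay the same.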
The idea of safe and smart vehicles has been researched extensively over the past decades to protect drivers from potentially dangerous situations. This paper presents a brief review of applications of image processing and computer vision techniques in smart vehicles. To detect other on-road vehicles, researchers have approached the problem from various angles, with solutions ranging from active sensors such as radar to passive sensors such as cameras. Recently, researchers have worked on creating a panoramic 360-degree view of the vehicle's environment by merging images from the sides, rear, and front of the car captured by passive sensors. There has also been work on constructing high-resolution images from low-cost, low-resolution cameras in order to reduce the final cost of the system. In this paper, we present a new algorithm for mono-camera vehicle detection that combines low-level features (edges) with high-level features (bag-of-features). To extract edge information robustly, we introduce a new edge detection method, the Difference of BiGaussian (DoBG). Experimental results show an average recognition rate of 98.5%, which is among the best results reported so far.
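The exact construction of the bi-Gaussian kernel is defined in the paper itself; the sketch below only approximates the idea with a difference of two Gaussian-smoothed responses, so that structures where a narrow and a wide smoothing scale disagree (i.e. edges) stand out. The sigmas and threshold are illustrative placeholders.

```python
import cv2
import numpy as np

def dobg_edges(gray, sigma_fine=1.0, sigma_coarse=2.5, thresh=8.0):
    """Approximate DoBG-style edge map: subtract a coarsely smoothed image
    from a finely smoothed one and threshold the magnitude (illustrative
    stand-in for the detector described in the paper)."""
    g = gray.astype(np.float32)
    fine = cv2.GaussianBlur(g, (0, 0), sigma_fine)      # narrow kernel
    coarse = cv2.GaussianBlur(g, (0, 0), sigma_coarse)  # wide kernel
    response = cv2.absdiff(fine, coarse)
    return (response > thresh).astype(np.uint8) * 255
```

In the described system, the resulting edge map would be combined with bag-of-features descriptors before the final vehicle classification stage.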