The accurate detection and recognition of traffic lights is important for autonomous vehicle navigation and advanced driver assistance systems. In this paper, we present a traffic light recognition algorithm for varying illumination conditions using computer vision and machine learning. Specifically, a convolutional neural network is used to detect traffic lights and extract features from visual camera images. To improve recognition accuracy, an on-board GPS sensor is employed to identify the region of interest in the visual image that contains the traffic light. In addition, a saliency map containing the traffic light location is generated from the recognition results under normal illumination to assist recognition under low illumination conditions. The proposed algorithm was evaluated on our data sets, acquired in a variety of real-world environments, and compared with a baseline traffic signal recognition algorithm. The experimental results demonstrate the high recognition accuracy of the proposed algorithm under varied illumination conditions.
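To make the GPS-based region-of-interest step concrete, the sketch below projects a mapped traffic-light position into the image with a pinhole camera model and crops a window around it for the CNN to search. This is a minimal sketch under assumed conventions; the camera parameters, window size, and function names are illustrative and not taken from the paper.

```python
import numpy as np

def project_to_image(light_xyz, R, t, K):
    """Project a mapped traffic-light position (world frame) into pixel
    coordinates with a pinhole model: x_cam = R @ x_world + t, then K."""
    p_cam = R @ light_xyz + t
    u, v, w = K @ p_cam
    return u / w, v / w

def gps_roi(light_xyz, R, t, K, img_shape=(720, 1280), half_size=(60, 120)):
    """Return a clipped (x0, y0, x1, y1) region of interest centred on the
    projected traffic light; the detector then searches only this window."""
    cx, cy = project_to_image(light_xyz, R, t, K)
    hw, hh = half_size
    x0 = int(np.clip(cx - hw, 0, img_shape[1] - 1))
    x1 = int(np.clip(cx + hw, 0, img_shape[1] - 1))
    y0 = int(np.clip(cy - hh, 0, img_shape[0] - 1))
    y1 = int(np.clip(cy + hh, 0, img_shape[0] - 1))
    return x0, y0, x1, y1
```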
Pedestrian detection is paramount for advanced driver assistance systems (ADAS) and autonomous driving. As a key technology in computer vision, it also has many other applications, such as security and surveillance. Pedestrian detection is generally conducted on images in the visible spectrum, which are not suitable for nighttime detection. Infrared (IR) or thermal imaging is often adopted at night because it captures the energy emitted by pedestrians. The detection process first extracts candidate pedestrians from the captured IR image. Robust feature descriptors are formulated to represent those candidates. A binary classification of the extracted features is then performed with trained classifier models. In this paper, an algorithm for pedestrian detection from IR images is proposed that adopts adaptive fuzzy C-means clustering and convolutional neural networks. The adaptive fuzzy C-means clustering is used to segment the IR images and retrieve candidate pedestrians. The candidates are then pruned using human posture characteristics and the second-central-moments ellipse. The convolutional neural network simultaneously learns relevant features and performs the binary classification. The performance of the proposed algorithm is compared with state-of-the-art algorithms on a publicly available data set; better detection accuracy is achieved with reduced computational cost.
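The second-central-moments ellipse used for candidate pruning can be illustrated with a short sketch. The moment formulas below follow the common regionprops-style convention (including the 1/12 pixel-discretization correction), and the elongation and tilt thresholds are assumptions for illustration, not values from the paper.

```python
import numpy as np

def moments_ellipse(mask):
    """Fit the second-central-moments ellipse to a binary candidate blob and
    return (major_axis, minor_axis, orientation_deg of the major axis)."""
    ys, xs = np.nonzero(mask)
    dx, dy = xs - xs.mean(), ys - ys.mean()
    uxx = np.mean(dx * dx) + 1.0 / 12.0   # pixel-discretization correction
    uyy = np.mean(dy * dy) + 1.0 / 12.0
    uxy = np.mean(dx * dy)
    common = np.sqrt((uxx - uyy) ** 2 + 4.0 * uxy ** 2)
    major = 2.0 * np.sqrt(2.0) * np.sqrt(uxx + uyy + common)
    minor = 2.0 * np.sqrt(2.0) * np.sqrt(uxx + uyy - common)
    theta = 0.5 * np.degrees(np.arctan2(2.0 * uxy, uxx - uyy))
    return major, minor, theta

def keep_candidate(mask, min_elongation=1.5, max_tilt_deg=35.0):
    """Keep a candidate only if its ellipse is elongated and roughly upright,
    as a standing pedestrian silhouette would be (thresholds illustrative)."""
    major, minor, theta = moments_ellipse(mask)
    upright = abs(abs(theta) - 90.0) <= max_tilt_deg
    return upright and (major / max(minor, 1e-6)) >= min_elongation
```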
In recent years, automated vehicle research has moved on to the next stage: autonomous driving experiments on public roads. A major challenge is how to drive robustly in complicated situations such as narrow or featureless roads. To achieve practical performance, static information such as road topology, building shapes, lane markings, curbs, and traffic lights should be kept in memory. Several surveying companies have already begun preparing map databases for automated vehicles and can provide highly precise 3-D maps for robust automated driving. This study focuses on what kind of data should be observed during automated driving with such a precise database. In particular, we focus on accurate localization based on lidar data and a precise 3-D map, and propose a feature quantity for scan data based on the distribution of clusters. Localization experiments show that our method can measure the uncertainty of the surroundings and guarantee accurate localization.
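The abstract does not define the cluster-distribution feature, so the sketch below illustrates one plausible reading as an assumption: group a 2-D lidar scan into clusters by simple Euclidean chaining and score how widely the cluster centroids cover the bearings around the vehicle, as a proxy for how well the scan constrains localization.

```python
import numpy as np

def cluster_distribution_feature(scan_xy, eps=0.5, min_pts=5, n_bins=36):
    """Hedged sketch of a cluster-distribution feature for a single scan.
    scan_xy: (N, 2) array of points in the sensor frame, angularly ordered.
    Returns the fraction of bearing bins occupied by at least one cluster
    centroid; low values indicate a poorly constrained (uncertain) scan."""
    centroids, current = [], [scan_xy[0]]
    for p in scan_xy[1:]:
        # A point joins the running cluster if it is close to the previous point.
        if np.linalg.norm(p - current[-1]) <= eps:
            current.append(p)
        else:
            if len(current) >= min_pts:
                centroids.append(np.mean(current, axis=0))
            current = [p]
    if len(current) >= min_pts:
        centroids.append(np.mean(current, axis=0))
    if len(centroids) < 2:
        return 0.0
    c = np.asarray(centroids)
    angles = np.arctan2(c[:, 1], c[:, 0])                  # bearing of each cluster
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    return np.count_nonzero(hist) / float(n_bins)          # angular coverage in [0, 1]
```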