Abstract-Collision avoidance is one of the most difficult and challenging automatic driving operations in the domain of intelligent vehicles. In emergency situations, human drivers are more likely to brake than to steer, although the optimal maneuver would, more frequently, be steering alone. This finding suggests automatic steering as a promising means of avoiding accidents in the future. The objective of this paper is to present a collision avoidance system (CAS) for autonomous vehicles, focusing on pedestrian collision avoidance. The detection component is a stereo-vision-based pedestrian detection system that provides suitable measurements of the time to collision. The collision avoidance maneuver is performed using fuzzy controllers for the actuators that mimic human behavior and reactions, along with a high-precision Global Positioning System (GPS), which provides the information needed for autonomous navigation. The proposed system is evaluated in two steps. First, drivers' behavior and sensor accuracy are studied in manual-driving experiments; this study defines the parameters of the second step, in which automatic pedestrian collision avoidance is carried out at speeds of up to 30 km/h. The field tests yielded encouraging results and proved the viability of the proposed approach.
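The abstract does not specify how time to collision (TTC) is computed from the stereo measurements; a minimal sketch of the standard range-over-range-rate estimate is given below. All names (`time_to_collision`, `range_now`, `dt`) are illustrative assumptions, not taken from the paper.

```python
def time_to_collision(range_now, range_prev, dt):
    """Estimate TTC (s) from two consecutive range measurements (m).

    A stereo pedestrian detector can supply range at each frame; the
    closing speed is the finite difference of consecutive ranges.
    """
    closing_speed = (range_prev - range_now) / dt  # m/s, > 0 when closing
    if closing_speed <= 0:
        return float("inf")  # not closing; no collision predicted
    return range_now / closing_speed


# Example: pedestrian 20 m ahead, closing at about 8.33 m/s (~30 km/h),
# sampled every 0.1 s -> TTC of roughly 2.4 s.
ttc = time_to_collision(20.0, 20.833, 0.1)
```

A real system would smooth the range-rate estimate (e.g., with a Kalman filter) rather than use a raw two-sample difference, which is noisy.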
Abstract-Over the past few years, advanced driver-assistance systems (ADASs) have become a key element in the research and development of intelligent transportation systems (ITSs), and particularly of intelligent vehicles. Many of these systems require accurate global localization, which has traditionally been provided by the Global Positioning System (GPS), despite its well-known failings, particularly in urban environments. Different solutions have been attempted to bridge GPS positioning errors, but they usually require additional expensive sensors. Vision-based algorithms have proved capable of tracking the position of a vehicle over long distances using only a sequence of images as input and with no prior knowledge of the environment. This paper describes a full solution to estimating the global position of a vehicle on a digital road map by means of visual information alone. Our solution is based on a stereo platform used to estimate the motion trajectory of the ego vehicle and a map-matching algorithm, which corrects the cumulative errors of the vision-based motion estimate and locates the vehicle on a digital road map. We demonstrate our system in large-scale urban experiments, reaching high accuracy in the estimated global position and tolerating longer GPS blackouts, thanks to both the high accuracy of our visual odometry and the correction of its cumulative error by the map-matching algorithm. Typical challenging situations in urban environments, such as nonstatic objects or illumination exceeding the dynamic range of the cameras, are shown and discussed.
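To illustrate the core idea of the abstract, that incremental visual odometry accumulates drift which a map-matching step can correct, the following toy sketch integrates 2-D odometry steps and then projects the drifting estimate onto the nearest road segment. This is not the paper's algorithm; the road geometry, step data, and all function names are made-up assumptions for illustration.

```python
import math


def dead_reckon(pose, step):
    """Integrate one odometry step (d_forward, d_heading) into (x, y, theta)."""
    x, y, theta = pose
    d, dtheta = step
    theta += dtheta
    return (x + d * math.cos(theta), y + d * math.sin(theta), theta)


def project_to_segment(p, a, b):
    """Orthogonal projection of point p onto the segment a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    vx, vy = bx - ax, by - ay
    t = ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy)
    t = max(0.0, min(1.0, t))  # clamp to the segment's endpoints
    return (ax + t * vx, ay + t * vy)


def map_match(pose, segments):
    """Snap the (x, y) estimate to the closest road segment, keeping heading."""
    x, y, theta = pose
    best = min((project_to_segment((x, y), a, b) for a, b in segments),
               key=lambda q: (q[0] - x) ** 2 + (q[1] - y) ** 2)
    return (best[0], best[1], theta)


# Drive three 10-m steps along a road lying on the x-axis; a small heading
# bias (0.02 rad per step) makes the raw estimate drift off the road until
# the map-matching step pulls it back.
road = [((0.0, 0.0), (100.0, 0.0))]
pose = (0.0, 0.0, 0.0)
for _ in range(3):
    pose = dead_reckon(pose, (10.0, 0.02))
pose = map_match(pose, road)
```

A practical map matcher must also handle junction ambiguity and use heading consistency, not just the nearest segment, but the drift-then-correct loop above captures why combining the two senso­r-free estimates can survive long GPS blackouts.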