A new car-following model, termed the multiple headway, velocity, and acceleration difference (MHVAD) model, is proposed to describe traffic phenomena; it further extends the existing full velocity difference (FVD) and full velocity and acceleration difference (FVAD) models. Stability analysis shows that, compared with the FVD model and other previous models, the critical value of the sensitivity in the MHVAD model decreases and the stable region is noticeably enlarged. Finally, simulation results demonstrate that the dynamic performance of the proposed MHVAD model is better than that of the FVD and FVAD models.
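For concreteness, a plausible form of such a multi-anticipative extension is sketched below. This is an illustrative assumption based on the standard FVD structure, not the authors' exact equation; the weighting coefficients and the number m of preceding vehicles considered are hypothetical.

\[
\frac{dv_n(t)}{dt} = \kappa\left[V\!\left(\sum_{j=1}^{m}\beta_j\,\Delta x_{n,j}(t)\right) - v_n(t)\right]
+ \sum_{j=1}^{m}\lambda_j\,\Delta v_{n,j}(t)
+ \sum_{j=1}^{m}\gamma_j\,\Delta a_{n,j}(t)
\]

Here \(\Delta x_{n,j}\), \(\Delta v_{n,j}\), and \(\Delta a_{n,j}\) denote the headway, velocity difference, and acceleration difference between vehicle \(n\) and its \(j\)-th preceding vehicle, \(V(\cdot)\) is the optimal velocity function, \(\kappa\) is the sensitivity, and \(\beta_j\), \(\lambda_j\), \(\gamma_j\) are weighting coefficients. The FVD model is recovered as the special case \(m=1\) with the acceleration-difference term dropped.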
Driving intention prediction is one of the key technologies for advanced driver assistance systems (ADAS); it can greatly reduce traffic accidents caused by lane changes and ensure driving safety. In this paper, an advanced predictive method based on Multi-LSTM (Long Short-Term Memory) networks is proposed to predict lane change intention effectively. First, training and test sets are built from the real-world road data set NGSIM (Next Generation SIMulation), taking into account the ego vehicle's driving state and the influence of surrounding vehicles. Second, a Multi-LSTM-based prediction controller is constructed to learn vehicle behavior characteristics and the time-series relations among the various states during a lane change. Then, the effects of changes in the prediction model structure and in the data structure on the test results are examined. Finally, verification tests based on HIL (Hardware-in-the-Loop) simulation are conducted. The results show that the proposed prediction model can accurately predict vehicle lane change intention in highway scenarios, with a maximum prediction accuracy of 83.75%, which is higher than that of the commonly used SVM (Support Vector Machine).
INDEX TERMS: Intelligent vehicle, lane change, driving intention prediction, advanced driver assistance systems, multi-LSTM.
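As an illustration of the kind of architecture described above, the following is a minimal PyTorch sketch of an LSTM-based lane-change intention classifier. It is not the authors' implementation; the class name LaneChangeIntentLSTM, the feature count, the observation window length, the hidden sizes, and the three-class label coding are all assumptions for illustration.

```python
# Minimal sketch (assumptions, not the authors' code) of a stacked-LSTM
# lane-change intention classifier over NGSIM-style feature sequences:
# per time step, ego-vehicle states plus relative states of surrounding
# vehicles; output is a 3-class intention (keep lane / left / right).
import torch
import torch.nn as nn

class LaneChangeIntentLSTM(nn.Module):
    def __init__(self, n_features=18, hidden_size=128, num_layers=2, n_classes=3):
        super().__init__()
        # Stacked ("multi") LSTM layers capture the temporal relation of
        # the driving states over the observation window.
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True, dropout=0.2)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):             # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)         # out: (batch, time_steps, hidden_size)
        return self.head(out[:, -1])  # classify from the last time step

# Example: a 3-second window at 10 Hz (30 steps) with 18 assumed features.
model = LaneChangeIntentLSTM()
logits = model(torch.randn(4, 30, 18))   # (4, 3) class scores
intent = logits.argmax(dim=1)            # 0=keep, 1=left, 2=right (assumed coding)
```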
Face parsing is an important computer vision task that requires accurate pixel-level segmentation of facial parts (such as the eyes, nose, and mouth), providing a basis for further face analysis, modification, and other applications. In this paper, we introduce a simple, end-to-end face parsing framework, the STN-aided iCNN (STN-iCNN), which extends the interlinked Convolutional Neural Network (iCNN) by adding a Spatial Transformer Network (STN) between the two isolated stages. The STN provides a trainable connection between the two stages of the original iCNN pipeline, making end-to-end joint training possible. Moreover, as a by-product, the STN also produces more precise cropped parts than the original cropper. Owing to these two advantages, our approach significantly improves the accuracy of the original model.
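The following is a minimal PyTorch sketch of the differentiable-cropping role the STN plays between the two stages; it assumes rather than reproduces the authors' implementation, and the module name PartCropSTN, the localization network layout, the input channel count, and the crop size are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code) of the STN idea in
# STN-iCNN: a small localization network predicts an affine crop for a facial
# part from the coarse stage-one maps, and grid_sample performs the crop
# differentiably, so both stages can be trained jointly end to end.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartCropSTN(nn.Module):
    def __init__(self, in_channels=9, crop_size=81):
        super().__init__()
        self.crop_size = crop_size
        # Localization network: regresses 6 affine parameters from the
        # coarse label maps (channel count and layout are assumptions).
        self.loc = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 6),
        )
        # Initialize to the identity transform so training starts from
        # an uncropped view of the image.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, coarse_maps, image):
        theta = self.loc(coarse_maps).view(-1, 2, 3)   # (B, 2, 3) affine params
        size = (image.size(0), image.size(1), self.crop_size, self.crop_size)
        grid = F.affine_grid(theta, size, align_corners=False)
        # Differentiable crop: gradients flow back through theta to stage one.
        return F.grid_sample(image, grid, align_corners=False)
```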