Artificial Intelligence (AI) in the automotive industry allows car manufacturers to produce intelligent and autonomous vehicles by integrating AI-powered Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS), such as the Traffic Sign Recognition (TSR) system. Existing TSR solutions typically recognise only a limited set of sign categories. We therefore propose a TSR approach covering a broader range of road sign categories (Warning, Regulatory, Obligatory, and Priority signs) to build an intelligent, real-time system able to analyse, detect, and classify traffic signs into their correct categories. The approach is grounded in a survey of Traffic Sign Detection (TSD) and Traffic Sign Classification (TSC) methods, from which the best-performing ones in terms of accuracy and processing time are selected. The resulting pipeline combines Haar cascade detection with a deep Convolutional Neural Network (CNN) classifier. The TSC model is trained on the GTSRB dataset and then tested on various categories of road signs, reaching a testing accuracy of 98.56%. To further improve classification performance, we propose a new attention-based deep convolutional neural network, which outperforms existing traffic sign classification studies with a testing accuracy of 99.91% and an F1-measure of 99%. The developed TSR system is evaluated and validated on a Raspberry Pi 4 board, and experimental results confirm the reliable performance of the suggested approach.
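The attention mechanism mentioned above can be illustrated with a minimal NumPy sketch of a squeeze-and-excitation style channel-attention block, the kind of component an attention-based deep CNN classifier might insert between convolutional stages. The layer sizes, reduction ratio, and random weights here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def channel_attention(feature_map, reduction=4, rng=None):
    """Apply squeeze-and-excitation style channel attention to an (H, W, C) map.

    Sketch only: the bottleneck weights are random placeholders standing in
    for learned parameters.
    """
    h, w, c = feature_map.shape
    rng = np.random.default_rng(0) if rng is None else rng

    # Squeeze: global average pooling -> one descriptor per channel, shape (C,).
    z = feature_map.mean(axis=(0, 1))

    # Excitation: bottleneck MLP (ReLU then sigmoid) produces a gate in (0, 1)
    # per channel; in a trained network these weights are learned.
    w1 = rng.standard_normal((c, c // reduction)) * 0.1
    w2 = rng.standard_normal((c // reduction, c)) * 0.1
    s = np.maximum(z @ w1, 0.0)            # ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))    # sigmoid gate, shape (C,)

    # Reweight: scale each channel by its attention weight (broadcast over H, W).
    return feature_map * s

# Example: a dummy 8x8 feature map with 16 channels.
fmap = np.ones((8, 8, 16))
out = channel_attention(fmap)
```

Because the gate is a sigmoid, each channel is scaled by a factor strictly between 0 and 1, letting the network emphasise informative channels and suppress others without changing the feature map's shape.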
Nowadays, new image processing and Human-Machine Interaction (HMI) technologies are closely linked to scientific research domains (medicine, transport, etc.). Indeed, the emergence of tactile and gestural interfaces makes it possible to augment the physical world around us with digital information, and to use natural hand gestures to interact with that information in the form of digital images. In this paper, we describe the basic concepts of image processing as well as the foundations of HMI. We then characterise gestural human interaction and its relation to imaging.