Pedestrians with disabilities are among the most vulnerable groups in road traffic, and technology that assists them could substantially reduce the mobility challenges they face daily. On the one hand, the automotive industry is focusing its efforts on vehicle automation; on the other, assistive technology has in recent years been promoted as a means of strengthening the functional independence of people with disabilities. The success of these technologies, however, depends on how well they enable self-driving cars to interact with pedestrians with disabilities. This paper proposes an architecture, based on deep learning and IEEE 802.11p wireless technology, that facilitates interaction between pedestrians with disabilities and self-driving cars. Assistive technology is used to locate the pedestrian with a disability within the road traffic ecosystem, and a set of functions is defined for recognizing the hand gestures of people with disabilities. These functions allow pedestrians with disabilities to express their intentions, improving their confidence and safety in road-ecosystem tasks such as crossing the street.