Fig. 1: The pedestrian crossing scenario in virtual reality, shown from three perspectives: (a) a participant crossing the virtual street; (b) a top-down diagram of the intersection, where the red dot marks the starting position and the blue dashed line marks the line-of-sight threshold at which the vehicle becomes visible; (c) the first-person view rendered for the subject based on the orientation of their head.

Abstract: We use an immersive virtual reality environment to explore the intricate social cues underlying the non-verbal communication involved in a pedestrian's crossing decision. We "hack" non-verbal communication between pedestrian and vehicle by engineering a set of 15 vehicle trajectories, some of which follow social conventions and some of which break them. By subverting social expectations of vehicle behavior, we show that pedestrians may use vehicle kinematics to infer social intentions, not merely to track the state of a moving object. We investigate human behavior in this virtual world through a study of 22 subjects, each of whom experienced and responded to each of the trajectories by moving their body, legs, arms, and head in both the physical and the virtual world. Both quantitative and qualitative responses were collected and analyzed, showing that social cues can in fact be engineered through vehicle trajectory manipulation. In addition, we demonstrate that immersive virtual worlds that allow the pedestrian to move around freely provide a powerful way to understand both the mechanisms of human perception and the social signaling involved in pedestrian-vehicle interaction.
When asked, a majority of people believe that, as pedestrians, they make eye contact with the driver of an approaching vehicle when making their crossing decisions. This work presents evidence that this widely held belief is false. We do so by showing that, in the majority of cases where conflict is possible, pedestrians begin crossing long before they are able to see the driver through the windshield. In other words, we circumvent the very difficult question of whether pedestrians choose to make eye contact with drivers by showing that, whether they think they do or not, they cannot. Specifically, we show that over 90% of people in representative lighting conditions cannot determine the gaze of the driver at 15 m, nor see the driver at all at 30 m. This means, for example, that given the common city speed limit of 25 mph, more than 99% of pedestrians would have begun crossing before being able to see either the driver or the driver's gaze. In other words, from the pedestrian's perspective, in most situations involving an approaching vehicle, the crossing decision is based solely on the kinematics of the vehicle, without needing to confirm eye contact by explicitly detecting the eyes of the driver.
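The timing argument above can be checked with back-of-the-envelope arithmetic. The sketch below (illustrative only; the 15 m and 30 m thresholds come from the study, while the constant-speed assumption and the function name are ours) computes how many seconds remain before a 25 mph vehicle arrives once the driver, or the driver's gaze, first becomes discernible:

```python
# How much time remains once the driver (30 m) or the driver's gaze
# (15 m) first becomes visible, for a vehicle at a typical city speed.
MPH_TO_MS = 0.44704  # metres per second per mile-per-hour

def time_to_arrival(distance_m: float, speed_mph: float) -> float:
    """Seconds until the vehicle covers distance_m, assuming constant speed."""
    return distance_m / (speed_mph * MPH_TO_MS)

speed_mph = 25.0  # common city speed limit cited in the text
for threshold_m in (30.0, 15.0):  # driver visible / gaze discernible
    t = time_to_arrival(threshold_m, speed_mph)
    print(f"{threshold_m:>4.0f} m -> {t:.2f} s before arrival")
# 30 m leaves about 2.7 s; 15 m leaves about 1.3 s -- far less time
# than a typical crossing takes, consistent with the claim that most
# pedestrians commit to crossing before either cue is available.
```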
Fig. 1: Example video frame and detection of vehicles and pedestrians from the MIT-AVT naturalistic driving dataset [4]: (a) example video frame of the forward roadway; (b) pedestrian and vehicle detection via YOLOv3.

Abstract: Humans, as both pedestrians and drivers, generally navigate traffic intersections skillfully. Despite the uncertainty, danger, and non-verbal nature of the communication commonly found in these interactions, there are surprisingly few collisions relative to the total number of interactions. As the role of automation technology in vehicles grows, it becomes increasingly critical to understand the relationship between pedestrian and driver behavior: how pedestrians perceive the actions of a vehicle/driver and how pedestrians make crossing decisions. The relationship between time-to-arrival (TTA) and pedestrian gap acceptance (i.e., whether a pedestrian chooses to cross within a given time window) has been extensively investigated. However, the dynamic nature of vehicle trajectories in the context of non-verbal communication has not been systematically explored. Our work provides evidence that trajectory dynamics, such as changes in TTA, can be powerful signals in the non-verbal communication between drivers and pedestrians. Moreover, we investigate these effects in both simulated and real-world datasets, each larger, to the best of our knowledge, than any previously considered in the literature.
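To make "changes in TTA" concrete, the sketch below (our own illustration, not the paper's method; the function names, sample data, and 1 s sampling interval are assumptions) computes TTA along a sampled trajectory and its change between samples. For a vehicle at constant speed, TTA shrinks by one second per elapsed second; a braking vehicle makes TTA shrink more slowly or even grow, which is exactly the kind of kinematic signal a pedestrian could read as an intention to yield:

```python
# Time-to-arrival (TTA) along a sampled vehicle trajectory, and its
# change over time -- a simple trajectory-dynamics signal.

def tta_series(distances_m, speeds_ms):
    """TTA at each sample, assuming the current speed were held constant."""
    return [d / v if v > 0 else float("inf")
            for d, v in zip(distances_m, speeds_ms)]

def tta_deltas(ttas, dt_s):
    """Change in TTA per second between consecutive samples.
    Around -1: constant speed (TTA shrinks in step with elapsed time).
    Above -1 (or positive): braking.  Below -1: accelerating."""
    return [(b - a) / dt_s for a, b in zip(ttas, ttas[1:])]

# Hypothetical decelerating vehicle, sampled once per second.
dist = [40.0, 32.0, 26.0, 22.0]  # metres to the crossing point
spd = [8.0, 6.0, 4.0, 2.0]       # m/s
ttas = tta_series(dist, spd)
print([round(t, 2) for t in ttas])          # [5.0, 5.33, 6.5, 11.0]
print([round(d, 2) for d in tta_deltas(ttas, 1.0)])  # all positive: yielding
```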