Summary: Autonomous vehicles will have to coordinate their behavior with human road users such as drivers and pedestrians. The majority of recently proposed solutions for autonomous vehicle-to-human communication consist of introducing additional visual cues (such as lights, text, and pictograms) either on the car's exterior or as projections on the road. We argue that potential shortcomings in the visibility (due to light conditions, placement on the vehicle) and immediate understandability (learned, directive) of many of these cues make them insufficient on their own for mediating multi-party interactions in the busy intersections of day-to-day traffic. Our observations of real-world human road user behavior in urban intersections indicate that movement in context is a central method of communication for coordination among drivers and pedestrians. The observed movement patterns gain meaning when seen within the context of road geometry, current road activity, and culture. While all movement communicates the intention of the driver, we highlight the use of movement as gesture, performed for the specific purpose of communicating with other road users, and give examples of how these gestures influence traffic interactions. An awareness and understanding of the effect and importance of movement gestures in day-to-day traffic interactions is needed for developers of autonomous vehicles to design forms of human-vehicle communication that are effective and scalable in multi-party interactions.
As part of our research on multimodal analysis and visualization of activity dynamics, we are exploring the integration of data produced by a variety of sensor technologies within ChronoViz, a tool aimed at supporting the simultaneous visualization of multiple streams of time-series data. This paper reports on the integration of a mobile eye-tracking system with data streams collected from HD video cameras, microphones, digital pens, and simulation environments. We focus on the challenging environment of the commercial airline flight deck, analyzing the use of mobile eye-tracking systems in aviation human factors and reporting on techniques and methods that can be applied in this and other domains in order to successfully collect, analyze, and visualize eye-tracking data in combination with the array of data types supported by ChronoViz.
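The core integration challenge described above is temporal alignment: gaze samples, video frames, audio, pen strokes, and simulator logs each arrive with their own clock and sampling rate. The sketch below is a minimal illustration of one common alignment step, matching each gaze sample to the nearest video frame by timestamp. It is not ChronoViz code; the shared-clock assumption and the 30 fps / 60 Hz rates are hypothetical.

```python
# Illustrative sketch (not ChronoViz code): align mobile eye-tracking samples
# with video frames by matching each gaze timestamp to the nearest frame time.
# Assumes both streams have already been synchronized to a common clock
# (e.g., via a sync event visible to the scene camera and room cameras).
from bisect import bisect_left

def nearest_frame(frame_times, t):
    """Return the index of the video frame whose timestamp is closest to t."""
    i = bisect_left(frame_times, t)
    if i == 0:
        return 0
    if i == len(frame_times):
        return len(frame_times) - 1
    # Choose whichever neighbor is closer in time.
    return i - 1 if t - frame_times[i - 1] <= frame_times[i] - t else i

def align_gaze_to_video(gaze_samples, frame_times):
    """Pair each (timestamp, x, y) gaze sample with a video frame index."""
    return [(t, x, y, nearest_frame(frame_times, t)) for (t, x, y) in gaze_samples]

# Hypothetical example: 30 fps video and 60 Hz gaze data on the same clock.
frames = [i / 30.0 for i in range(300)]
gaze = [(i / 60.0, 0.5, 0.5) for i in range(600)]
aligned = align_gaze_to_video(gaze, frames)
```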
In this paper, we describe an integrative approach to understanding flight crew activity. Our approach combines contemporary innovations in cognitive science theory with a new suite of methods for measuring, analyzing, and visualizing the activities of commercial airline flight crews in interaction with the complex automated systems found on the modern flight deck. Our unit of analysis is the multiparty, multimodal activity system. We installed a variety of recording devices in high-fidelity flight simulators to produce rich, multistream time-series data sets. The complexity of such data sets and the need for manual coding of high-level events make large-scale analysis prohibitively expensive. We break through this analysis bottleneck by using our newly developed integrated software system called ChronoViz, which supports visualization and analysis of multiple sources of time-coded data, including multiple sources of high-definition video, simulation data, transcript data, paper notes, and eye gaze data. Four examples of flight crew activity serve to illustrate the methods, the theory, and the kinds of findings that are now possible in the study of flight crew interaction with flight deck automation.
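To make the idea of analyzing multiple sources of time-coded data more concrete, the following sketch models a few annotation streams (transcript, simulation events, gaze fixations) and a window query that gathers co-occurring events. The stream names, the Event structure, and the sample values are illustrative assumptions, not the ChronoViz data model.

```python
# Illustrative sketch (not the ChronoViz implementation): a minimal model of
# multiple time-coded annotation streams and a query that gathers everything
# overlapping a time window, e.g. "what happened across all streams during
# this 7-second stretch of the simulator session?"
from dataclasses import dataclass

@dataclass
class Event:
    start: float   # seconds from the common session start
    end: float
    label: str

def events_in_window(streams, t0, t1):
    """Return {stream_name: [events overlapping the window t0..t1]}."""
    return {
        name: [e for e in events if e.start < t1 and e.end > t0]
        for name, events in streams.items()
    }

# Hypothetical sample data on a shared session clock.
streams = {
    "transcript": [Event(120.0, 123.5, "CPT: 'set speed two ten'")],
    "simulation": [Event(121.2, 121.2, "MCP speed knob turned")],
    "gaze":       [Event(119.8, 124.0, "fixation: mode control panel")],
}
print(events_in_window(streams, 118.0, 125.0))
```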