2019
DOI: 10.1016/j.bandc.2019.103583

Hemodynamic responses to visual cues during attentive listening in autonomous versus manual simulated driving: A pilot study

Abstract: Emerging automation technologies could have a strong impact on the allocation of drivers' attentional resources. The first objective of this pilot study is to investigate the hemodynamic responses evoked by relevant visual stimuli in manual and autonomous driving. The second aim is to examine how the inclusion of a secondary task (attentive listening to a broadcast) modulates these hemodynamic responses in both driving situations. Frontal, temporo-parietal and occipital activations were recorded using a functi…

Cited by 7 publications (16 citation statements)
References 33 publications
“…The majority of studies included here were conducted in a simulator setting, with only nine occurring in an on-road environment (Harada et al., 2007; Shimizu T. et al., 2011; Yoshino et al., 2013a,b; Inoue et al., 2014; Orino et al., 2015, 2017; Liu et al., 2017; Le et al., 2018). However, as described above, the quality of the simulators (e.g., fidelity of the visual environment, amount of visual field encompassed, realism of the simulator to a real automobile) varied between low-fidelity desktop computer setups (Shang et al., 2007; Li et al., 2009, 2018; Tomioka et al., 2009; Liu, 2014; Khan and Hong, 2015; Pradhan et al., 2015; Unni et al., 2015; Ahn et al., 2016; Horrey et al., 2017; Nguyen et al., 2017; Xu L. et al., 2017; Hidalgo-Munoz et al., 2019; Khan et al., 2019; Lin et al., 2019; Tanveer et al., 2019) and more immersive simulated environments (Nakano et al., 2013; Oka et al., 2015; FakhrHosseini et al., 2015; Foy et al., 2016; Foy and Chapman, 2018; Huve et al., 2018, 2019; Sturman and Wiggins, 2019; Yamamoto et al., 2019), including large-scale simulators that comprise a real car mock-up along with a wide field of vision (Tsunashima and Yanagisawa, 2009; Shimizu et al., 2009; Orino et al., 2015;…”
Section: Experimental Environment
confidence: 99%
“…However, as described above, the quality of the simulators (e.g., fidelity of the visual environment, amount of visual field encompassed, realism of the simulator to a real automobile) varied between low-fidelity desktop computer setups (Shang et al., 2007; Li et al., 2009, 2018; Tomioka et al., 2009; Liu, 2014; Khan and Hong, 2015; Pradhan et al., 2015; Unni et al., 2015; Ahn et al., 2016; Horrey et al., 2017; Nguyen et al., 2017; Xu L. et al., 2017; Hidalgo-Munoz et al., 2019; Khan et al., 2019; Lin et al., 2019; Tanveer et al., 2019) and more immersive simulated environments (Nakano et al., 2013; Oka et al., 2015; FakhrHosseini et al., 2015; Foy et al., 2016; Foy and Chapman, 2018; Huve et al., 2018, 2019; Sturman and Wiggins, 2019; Yamamoto et al., 2019), including large-scale simulators that comprise a real car mock-up along with a wide field of vision (Tsunashima and Yanagisawa, 2009; Shimizu et al., 2009; Orino et al., 2015; Nosrati et al., 2016; Sibi et al., 2016, 2017; Balters et al., 2017; Unni et al., 2017; Bruno et al., 2018; Chuang et al., 2018; Ihme et al., 2018; Zhu et al., 2019). To minimize motion artifacts, participants in eight studies were specifically instructed to …”
Section: Experimental Environment
confidence: 99%
“…The approach of studying evoked hemodynamic responses can therefore be considered a rapid event-related design, in which the inter-stimulus intervals may be shorter than the elicited response [51]. As segments in the study were short, the mean signal value of Δ(HbO) was the most representative parameter [52].…”
Section: fNIRS Data Processing
confidence: 99%
“…As segments in the study were short, the mean signal value of ∆(HbO) was the most representative parameter [52]. Table 2 lists the channels discarded by region and participant, due to poor SNR, by the channel-pruning function in Homer2.…”
Section: fNIRS Data Processing
confidence: 99%
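The two excerpts above describe a common fNIRS preprocessing pattern: discard channels with poor SNR, then summarize each short segment by its mean Δ(HbO). A minimal sketch of that idea follows; Homer2 itself is a MATLAB toolbox, so this is an illustrative Python analogue with assumed function names and thresholds, not the actual Homer2 routine.

```python
import numpy as np

def prune_channels(signals, snr_threshold=2.0):
    """Keep channels whose SNR (|mean| / std) meets a threshold.

    signals: array of shape (n_channels, n_samples) of raw intensities.
    Returns the indices of retained channels. Illustrative analogue of
    Homer2-style channel pruning; the threshold value is an assumption.
    """
    snr = np.abs(signals.mean(axis=1)) / (signals.std(axis=1) + 1e-12)
    return np.where(snr >= snr_threshold)[0]

def segment_mean_hbo(delta_hbo, segments):
    """Mean Δ(HbO) per (start, end) sample window -- the summary
    statistic used when segments are too short for peak features."""
    return np.array([delta_hbo[start:end].mean() for start, end in segments])

# Example: 4 channels, 100 samples; channel 2 is near-zero-mean noise,
# so its SNR falls below the threshold and it is discarded.
rng = np.random.default_rng(0)
signals = np.ones((4, 100)) + 0.01 * rng.standard_normal((4, 100))
signals[2] = 0.01 * rng.standard_normal(100)
kept = prune_channels(signals)  # channel 2 is pruned
```

The segment-mean statistic then applies only to the retained channels, mirroring the pipeline the cited study reports.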
“…The results showed that it was possible to differentiate the driver's activity and emotional state with different physiological measurements: cardiac variability to detect emotion, breathing amplitude to distinguish the type of activity of the driver in a state of sadness, and the breathing period to differentiate activity in a neutral emotional state [9]. This paper builds on the conclusions of these previous studies, with the objective of developing a global diagnosis of the driver's state based on three types of indicators (i.e., gaze, postural, and physiological indicators).…”
Section: Introduction
confidence: 99%