2022
DOI: 10.3390/ijerph19095059
Driver’s Visual Attention Characteristics and Their Emotional Influencing Mechanism under Different Cognitive Tasks

Abstract: The visual attention system is the gateway to the human information processing system, and emotion is an important part of the human perceptual system. In this paper, the driver's visual attention characteristics and the influence of typical driving emotions on them were explored by analyzing drivers' fixation time and identification accuracy for different visual cognitive tasks during driving. The results showed that the increasing complexity of the cognitive object led to the improvement of visual ide…


Cited by 6 publications (1 citation statement)
References 53 publications
“…The attention mechanism can be seen as a simulation of human attention; that is, humans pay attention to valuable information while ignoring useless information. A new attention mechanism architecture, the transformer, based on the self-attention Seq2seq [28] model, has demonstrated powerful capabilities in sequential data processing, such as natural language processing [29], audio processing [30], and even computer vision [31]. Unlike an RNN, a transformer allows the model to access any part of the history regardless of distance, making it more suitable for mastering repeated patterns with long-term dependencies and for preventing overfitting.…”
Section: Introduction
confidence: 99%
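The quoted statement contrasts transformers with RNNs: self-attention connects every timestep to every other in a single step, so no information has to survive a long recurrent chain. A minimal sketch of scaled dot-product self-attention illustrates this; note it is a simplification under stated assumptions (single head, no learned Q/K/V projections), not the cited paper's model:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence X of shape (T, d).

    Every position attends to every other position directly, so the path
    length between any two timesteps is 1, regardless of their distance.
    Simplifying assumption: queries, keys, and values are all X itself
    (no learned projection matrices, single attention head).
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # (T, T) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ X                               # (T, d) attended output

# Toy 5-step sequence: step 4 attends to step 0 as easily as to step 3.
X = np.random.default_rng(0).normal(size=(5, 8))
out = self_attention(X)
print(out.shape)  # (5, 8)
```

Because the (T, T) weight matrix links all positions directly, gradients between distant timesteps do not decay through intermediate steps, which is the property the statement credits for handling long-term dependencies.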