2019 IEEE Intelligent Transportation Systems Conference (ITSC)
DOI: 10.1109/itsc.2019.8916929
A Novel Spatial-Temporal Graph for Skeleton-based Driver Action Recognition

Cited by 24 publications (5 citation statements)
References 12 publications
“…Drawing inspiration from the robust methods in the paper "Robust Construction of Spatial-Temporal Scene Graph Considering Perception Failures for Autonomous Driving" [39], the metrics utilized in our quantitative evaluation consider potential perception failures. These evaluation metrics offer a compelling means to assess the effectiveness and accuracy of our scenario-based segmentation model, thus ensuring the model's robustness in autonomous driving contexts.…”
Section: Quantitative Evaluation
confidence: 99%
“…Check for details.
StateFarm [192]: Safe Drive, Text-Right, Talk on the Phone-Right, Text-Left, Talk on the Phone-Left, Operate the Radio, Drink, Reach Behind, Hair and Makeup, Talk to Passenger
DHG-14/28 [70]: Grab, Tap, Expand, Pinch, Rotate Clockwise, Rotate Counter Clockwise, Swipe Right, Swipe Left, Swipe Up, Swipe Down, Swipe X, Swipe V, Swipe +, Shake
Volleyball [115]: Wait, Set, Dig, Fall, Spike, Block, Jump, Move, Stand
SYSU [62, 69, 72, 75, 85, 90, 97, 135, 136]: Drink, Pour, Call Phone, Play Phone, Wear Backpacks, Pack Backpacks, Sit Chair, Move Chair, Take Out Wallet, Take From Wallet, Mope, Sweep
SHREC’17 [70]: Grab, Tap, Expand, Pinch, Rotate Clockwise, Rotate Counter Clockwise, Swipe Right, Swipe Left, Swipe Up, Swipe Down, Swipe X, Swipe V, Swipe +, Shake
Kinetics [13, 14, 48, 49, 50, 51, 52, 53, 54, 55, 57, 60, …”
Section: Table A1
confidence: 99%
“…Until recently, the majority of driving observation frameworks comprised a manual feature extraction step followed by a classification module (for a thorough overview see [21]). The constructed feature vectors are often derived from hand- and body-pose [2], [3], [6], [7], [38], [39], facial expressions and eye-based input [40], [41], and head pose [42], [43], but foot dynamics [44], detected objects [6], and physiological signals [45] have also been considered. Classification approaches are fairly similar to the ones used in standard video classification, with LSTMs [3], [4], SVMs [2], [46], random forests [47], HMMs [4], and graph neural networks [7], [48] being popular choices.…”
Section: Related Work, A. Driver Action Recognition
confidence: 99%
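The excerpt above describes the classical two-stage route: hand-crafted features computed from pose or other signals, fed to a conventional classifier such as an SVM. As a rough illustration of that pipeline (not the method of the cited paper or of any cited work), the sketch below derives pairwise joint-distance features from a skeleton and trains a scikit-learn SVM; the feature choice, joint count, and toy data are all assumptions made for the example.

```python
# Minimal sketch of a classical skeleton-based action recognition pipeline:
# hand-crafted features (pairwise joint distances) + an SVM classifier.
# The 25-joint layout, feature design, and random toy data are illustrative
# assumptions, not the setup of any paper cited above.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def joint_distance_features(skeleton):
    """Turn a (num_joints, 3) skeleton into a vector of pairwise joint distances."""
    diffs = skeleton[:, None, :] - skeleton[None, :, :]  # (J, J, 3) displacement
    dists = np.linalg.norm(diffs, axis=-1)               # (J, J) distance matrix
    iu = np.triu_indices_from(dists, k=1)                # keep upper triangle only
    return dists[iu]                                     # (J*(J-1)/2,) feature vector

# Toy data: 100 frames of a 25-joint skeleton, with 10 hypothetical action labels.
rng = np.random.default_rng(0)
skeletons = rng.normal(size=(100, 25, 3))
labels = rng.integers(0, 10, size=100)

X = np.stack([joint_distance_features(s) for s in skeletons])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict(X[:5]))  # predicted action labels for the first five frames
```

Graph neural network approaches, by contrast, keep the skeleton's joint-connectivity structure and learn features end-to-end rather than relying on fixed distance descriptors like the one sketched here.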