2022
DOI: 10.3390/app122010445
Binary Dense SIFT Flow Based Position-Information Added Two-Stream CNN for Pedestrian Action Recognition

Abstract: Pedestrian behavior recognition in the driving environment is an important technology for preventing pedestrian accidents by predicting the next movement. Recognizing current pedestrian behavior is necessary to predict future pedestrian behavior. However, many studies have recognized visible human characteristics such as the face, body parts, or clothes, while few have recognized pedestrian behavior. Recognizing pedestrian behavior in the driving environment is challenging due to the changes in the camera field o…

Cited by: 5 publications (1 citation statement)
References: 49 publications
“…It has a wide range of applications in real life. For example, human action recognition can be used for home monitoring to monitor the behavioral activities of the elderly and to detect dangerous actions such as falls in a timely manner [1], and it can help an automatic navigation system analyze and predict the action of pedestrians [2]. Commonly used inputs for human action recognition algorithms include RGB images and videos [3], skeleton [4], depth [5], point-cloud [6], and so on.…”
Section: Introduction
Confidence: 99%