Modelling Human Motion 2020
DOI: 10.1007/978-3-030-46732-6_4
The Visual Perception of Biological Motion in Adults

Cited by 2 publications (3 citation statements)
References 60 publications
“…The previous research [1, 2, 3] highlights that, to recognize actions, the human brain relies on two levels of visual processing: one high level and one low level. The high-level processing deals with semantic processing and is based on global features, while the low-level processing focuses on kinematic features.…”
Section: Related Work
confidence: 99%
“…This can be as simple as noticing that another person is looking at something interesting and looking at the same thing, or a shared activity such as collaborating on moving furniture through a doorway. This can be coupled with how humans perceive each other's intentions, often focusing on gauging the other person's motion rather than maintaining eye contact [33]. Human motion has been found to be identifiable even when only a grid of points is shown, rather than a whole human, and this motion not only follows reasonably simple rules but is also used by people to gauge intentions [33,34]. An important note is that some studies on this kind of motion or intention identification focus on general motion, while others focus on context-specific motion, such as identifying cyclists' intentions in traffic.…”
Section: Collaborative Robot Applications
confidence: 99%