Proceedings of Computer Graphics International 2018
DOI: 10.1145/3208159.3208192
Understanding Human-Object Interaction in RGB-D videos for Human Robot Interaction

Cited by: 6 publications (2 citation statements). References: 24 publications.
“…A modified A* algorithm, with improvements based on rectangular symmetry reduction and jump point search for simultaneous localization and mapping of mobile robots, has been used to improve navigational behavior. Fang et al. used RGB-D sensors with depth information to improve human–robot interaction in recognizing and detecting objects. Zhang et al. developed and presented a quadratic program method and a virtual plane approach for formulating coordinated dual-arm motion with an analytical solution of head motion, which improves the efficiency of the robot's head–arm model toward human-like behavior generation…”
Section: Introduction
Mentioning confidence: 99%
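The quoted passage credits Fang et al. with using depth information from RGB-D sensors to ground object recognition and detection for human–robot interaction. Below is a minimal sketch of the standard pinhole back-projection that turns a depth frame into camera-frame 3D coordinates; the intrinsics (fx, fy, cx, cy), image size, and region of interest are illustrative assumptions, not values from the cited paper.

```python
import numpy as np

def backproject_depth(depth_m, fx, fy, cx, cy):
    """Convert a depth map in meters into an (H, W, 3) array of camera-frame
    XYZ coordinates using the pinhole model. Zero depth marks invalid pixels."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grid
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Illustrative intrinsics roughly matching a 640x480 consumer RGB-D sensor
# (assumed values, not taken from the paper).
depth = np.random.uniform(0.5, 4.0, size=(480, 640))  # stand-in depth frame
points = backproject_depth(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)

# The 3D centroid of a detected object's bounding box (here a hypothetical
# region, rows 100:200 and cols 300:400) gives its position relative to the camera.
roi = points[100:200, 300:400].reshape(-1, 3)
centroid = roi[roi[:, 2] > 0].mean(axis=0)
print(centroid)
```

In practice the RGB image drives the 2D detector, and the aligned depth map is back-projected as above so that each detection can be located in 3D for the robot.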
“…Human-Object Interaction (HOI) Recognition is the task of identifying how people interact with surrounding objects from the visual appearance of the scene, and it is of paramount importance for understanding the content of an image. It consists of producing a set of ⟨human, action, object⟩ triplets for the input image, providing a concise representation of the image semantics that can be used in higher-level tasks such as Image Captioning [1] or Human-Robot Interaction [2]…”
Section: Introduction
Mentioning confidence: 99%
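The passage defines HOI recognition as producing a set of ⟨human, action, object⟩ triplets per image. A minimal sketch of how such triplets might be represented and filtered by confidence is shown below; the dataclass fields and example detections are hypothetical and do not reproduce an interface from the cited works.

```python
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

@dataclass
class HOITriplet:
    """One <human, action, object> prediction for an image."""
    human_box: Box
    object_box: Box
    object_label: str
    action: str
    score: float  # joint confidence of the interaction

def keep_confident(triplets: List[HOITriplet], threshold: float = 0.5) -> List[HOITriplet]:
    """Retain triplets whose interaction score clears the threshold."""
    return [t for t in triplets if t.score >= threshold]

# Hypothetical output for a single frame: a person drinking from a cup
# and standing next to a table.
preds = [
    HOITriplet((40, 30, 180, 400), (120, 90, 160, 140), "cup", "drink_from", 0.87),
    HOITriplet((40, 30, 180, 400), (200, 220, 520, 400), "table", "stand_next_to", 0.42),
]
for t in keep_confident(preds):
    print(f"human {t.human_box} --{t.action}--> {t.object_label} {t.object_box} ({t.score:.2f})")
```

Such a triplet set is what downstream tasks like captioning or robot task planning would consume as the image-level interaction summary.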