2022
DOI: 10.3390/s22114279

Prediction-Based Human-Robot Collaboration in Assembly Tasks Using a Learning from Demonstration Model

Abstract: Most robots are programmed to carry out specific tasks routinely, with minor variations. However, a growing number of applications from SMEs require robots to work alongside human workers. To smooth the collaborative task flow and improve efficiency, a better approach is to enable the robot to infer what kind of assistance its human coworker needs and to take the right action at the right time. This paper proposes a prediction-based human-robot collaboration model for assembly sc…

Cited by 17 publications (4 citation statements). References 51 publications.
“…The ability to identify hand movement intention is prominent for robots on collaborative assembly lines in HRI scenarios. Zhang et al (2022) proposed a method to predict human hand motion during an assembly task to improve collaboration flow and efficiency. The design utilizes an RGB camera mounted over the robot, facing downwards at the working area on the table.…”
Section: Results
Mentioning confidence: 99%
“…When analyzing the data extracted from the literature and presented in Table 2, it becomes clear that the most popular sensor used in designs is the RGB camera (Fang et al., 2017; Li et al., 2018; Gardner et al., 2020; Jaouedi et al., 2020; Mohammadi Amin et al., 2020; Chiu et al., 2021; Ding and Zheng, 2022; Poulose et al., 2022; Tsitos et al., 2022; Zhang et al., 2022). This is likely due to the widespread use of image recognition applications in recent years, as well as the relative affordability and accessibility of RGB cameras in work environments.…”
Section: Discussion
Mentioning confidence: 99%
“…Human activity recognition has recently caught the attention of the computer vision community since it drives real-world applications that make our lives better and safer, such as human-computer interaction in robotics and gaming, video surveillance, and social activity recognition [1]. For example, new robotic applications try to predict human activity patterns in order to let the robot infer early when a specific collaborative operation will be requested by the human [2,3]. In video surveillance, human activity classification can be integrated with probabilistic prediction models in order to infer the ongoing activity [4].…”
Section: Introduction
Mentioning confidence: 99%
“…Kim et al [12] trained a position-force mapping model for a 3-mm square area using a clustering algorithm and achieved a model error of less than 0.35 mm. A peg-hole data model uses machine learning methods to train contact state data offline and predict the current state online [13][14][15]. This model is updated in real time for online learning through data input from the current contact state [16][17][18][19][20].…”
Section: Introduction
Mentioning confidence: 99%
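The offline-train / online-predict / real-time-update scheme described in the statement above can be sketched as a minimal nearest-centroid classifier over contact-state readings. This is an illustrative assumption, not the method of the cited papers: the class labels ("aligned", "jammed"), the two-dimensional force features, and all data here are hypothetical.

```python
# Hypothetical sketch: offline batch training, online prediction, and
# incremental (real-time) updating of a contact-state model, as a
# nearest-centroid classifier. Labels and features are illustrative only.

class ContactStateModel:
    def __init__(self):
        self.centroids = {}  # label -> (mean feature vector, sample count)

    def fit_offline(self, samples):
        """Batch-train from (label, feature_vector) pairs."""
        for label, x in samples:
            self.update_online(label, x)

    def predict(self, x):
        """Return the label whose centroid is closest to x (squared L2)."""
        def sq_dist(centroid):
            return sum((c - xi) ** 2 for c, xi in zip(centroid, x))
        return min(self.centroids, key=lambda lbl: sq_dist(self.centroids[lbl][0]))

    def update_online(self, label, x):
        """Fold one new labelled sample into its class centroid incrementally."""
        mean, n = self.centroids.get(label, ([0.0] * len(x), 0))
        n += 1
        mean = [m + (xi - m) / n for m, xi in zip(mean, x)]
        self.centroids[label] = (mean, n)


# Offline training on a tiny illustrative dataset of force readings.
model = ContactStateModel()
model.fit_offline([("aligned", [0.1, 0.0]), ("jammed", [2.0, 1.5])])
```

In use, `predict` gives the online state estimate each control cycle, and `update_online` folds in newly confirmed contact states so the model tracks the current conditions, mirroring the offline/online split the quoted passage describes.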