2017
DOI: 10.1007/s10514-017-9674-5

Efficient behavior learning in human–robot collaboration

Cited by 29 publications
(16 citation statements)
References 18 publications
“…For these demonstrations, a dynamical system was used to capture the position and orientation of the human wrist, which was then mirrored to the robot's motion. Another research development that uses both training datasets and execution behavior as sources for teaching the robot is presented in [104]. The robot used in the study could start acting autonomously based on a Markov Decision Process, where reward functions would dictate its confidence in the task it was about to perform.…”
Section: Robot Learning
confidence: 99%
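The statement above describes a robot that acts autonomously only when its reward-derived confidence in the task is high enough. A minimal toy sketch of that idea follows; it is not the cited system's actual model, and the states, rewards, threshold, and function names are invented purely for illustration:

```python
# Toy sketch (not the cited system): a small deterministic Markov Decision
# Process whose state value, computed from rewards, serves as a "confidence"
# score gating whether the robot acts autonomously or defers to the human.

def value_iteration(states, actions, transition, reward, gamma=0.9, iters=100):
    """Compute state values V(s) for a deterministic toy MDP."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            V[s] = max(reward(s, a) + gamma * V[transition(s, a)]
                       for a in actions)
    return V

# Hypothetical 3-state task: far from goal -> near goal -> goal (absorbing).
states = ["far", "near", "goal"]
actions = ["advance", "wait"]

def transition(s, a):
    if a == "advance":
        return {"far": "near", "near": "goal", "goal": "goal"}[s]
    return s  # "wait" keeps the current state

def reward(s, a):
    # Reward only for completing the final step of the task.
    return 1.0 if (s == "near" and a == "advance") else 0.0

V = value_iteration(states, actions, transition, reward)

def act_autonomously(state, threshold=0.5):
    """Act only when the value ("confidence") of the state clears a threshold."""
    return V[state] >= threshold

print(act_autonomously("near"))  # -> True: high value, robot proceeds alone
print(act_autonomously("goal"))  # -> False: absorbing state, no reward ahead
```

The threshold plays the role the quoted statement attributes to the reward functions: below it, the robot would wait for (or hand control to) the human partner.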
“…For example, in [20], the user is assembling a system of pipes; the robot has to understand which step the user is currently at and grab the pipe related to that step of the assembly. In [32], the robot helps the user assemble a box by understanding what the user is doing. In other cases, instead, the robot predicts the next step of the assembly process and prepares the scenario for the user to perform it later.…”
Section: Human-robot Collaboration Tasks
confidence: 99%
“…One main reason for this is that vision allows the system to potentially get information from different elements of the environment at the same time. For instance, [32] uses vision to acquire information about both the robot and the users in the experiments. In fact, through vision the robot becomes aware of the user's position in the environment and their physical features.…”
Section: Robot's Sensing Modalities
confidence: 99%
“…The procedure of the shared task, with its designated roles, was then modeled by adopting the Relational Action Processes (RAP) established by Toussaint et al. (2016). Through the use of a relational Markov Decision Process, which is commonly implemented for the decision processes of agents acting within an environment, the model enabled the simultaneous operation of several actions, either sequentially or asynchronously, depending on the current requirement (Munzer et al., 2018). Additional information from two agents monitoring the states of the pin-back button press, which tracked the current progress of the task and the speed at which the participant conducted it, was used to enable the robot arm to adapt its movement speed to the working pace of the human partner.…”
Section: Experimental Study
confidence: 99%
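The speed adaptation described in the statement above can be sketched as a simple proportional update toward the observed human pace. This is a hypothetical illustration, not the cited implementation; the function name, gain, and speed limits are invented:

```python
# Hypothetical sketch (not the cited implementation): nudge the robot's
# movement speed a fixed fraction toward the human partner's observed pace,
# clamped to safe limits.

def adapt_speed(robot_speed, human_pace, gain=0.5, v_min=0.1, v_max=1.0):
    """Move robot_speed a fraction `gain` toward human_pace, clamped."""
    new_speed = robot_speed + gain * (human_pace - robot_speed)
    return max(v_min, min(v_max, new_speed))

speed = 0.5
for pace in [0.9, 0.9, 0.3]:  # observed human pace at successive task steps
    speed = adapt_speed(speed, pace)
print(round(speed, 2))  # -> 0.55
```

A low gain makes the robot track the human's pace smoothly rather than jumping to each new observation, which is one common design choice for this kind of adaptation.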