2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2013
DOI: 10.1109/hri.2013.6483499

Human-robot cross-training: Computational formulation, modeling and evaluation of a human team training strategy

Citation: Nikolaidis, Stefanos, and Julie Shah. "Human-robot cross-training: Computational formulation, modeling and evaluation of a human team training strategy." In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 33-40. Institute of Electrical and Electronics Engineers (IEEE).

Cited by 127 publications (135 citation statements)
References 23 publications (41 reference statements)
“…-Chance: This algorithm chooses actions uniformly at random from the set of possible actions. -Mental-model MDP [33]: We follow Nikolaidis et al and define a MDP to formulate the robot's mental model [33]. In this approach, the human actions are incorporated into the state transition function and the policy specifies only the robot's actions.…”
Section: Methods
confidence: 99%
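The mental-model MDP described in the quote above can be sketched in a few lines: the human's stochastic responses are folded into the transition function, so planning is over robot actions only. The tiny three-state task, the transition probabilities, and all variable names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of a "mental-model" MDP: the human's behavior is
# marginalized into T, so the policy maps states to ROBOT actions only.
n_states, n_robot_actions = 3, 2
gamma = 0.95

# T[s, a, s'] = P(s' | s, robot action a), with the human's stochastic
# response to each robot action already folded in (hypothetical values).
T = np.zeros((n_states, n_robot_actions, n_states))
T[0, 0] = [0.2, 0.8, 0.0]   # human usually cooperates -> task progresses
T[0, 1] = [0.9, 0.1, 0.0]   # human rarely responds to this robot action
T[1, 0] = [0.0, 0.3, 0.7]
T[1, 1] = [0.0, 0.8, 0.2]
T[2, :, 2] = 1.0            # absorbing goal state

R = np.array([0.0, 0.0, 1.0])  # reward collected in the goal state

# Standard value iteration over robot actions only.
V = np.zeros(n_states)
for _ in range(200):
    Q = T @ (R + gamma * V)   # Q[s, a] = sum_s' T[s,a,s'] * (R[s'] + gamma*V[s'])
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)     # greedy robot policy
```

Because the human is inside `T` rather than an explicit second agent, this stays a single-agent MDP and ordinary dynamic programming applies.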
“…Therefore, we use the same state and action spaces and reward function as described in our approach with only one agent and compute the transition function from the state action sequences from the training data. Note that in our adaptation of [33] we fix the transition function learned from the data and do not perform any cross training iterations as the roles are fully exchangeable in our collaborative setting.…”
Section: Methods
confidence: 99%
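Computing a transition function "from the state action sequences from the training data", as the quote above describes, amounts to a maximum-likelihood count-and-normalize estimate. The toy episodes and function name below are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical logged episodes: lists of (state, action, next_state)
# transitions collected during training interactions.
episodes = [
    [(0, "a", 1), (1, "b", 2)],
    [(0, "a", 0), (0, "a", 1), (1, "b", 2)],
]

# Count observed transitions per (state, action) pair.
counts = defaultdict(lambda: defaultdict(int))
for episode in episodes:
    for s, a, s_next in episode:
        counts[(s, a)][s_next] += 1

def transition_prob(s, a, s_next):
    """Empirical estimate of P(s_next | s, a); 0.0 if (s, a) was never seen."""
    total = sum(counts[(s, a)].values())
    return counts[(s, a)][s_next] / total if total else 0.0
```

Once estimated this way, the transition function can be fixed (no further cross-training iterations), matching the adaptation the citing authors describe.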