2015
DOI: 10.1007/978-3-319-23868-5_18

POMDP Based Action Planning and Human Error Detection

Abstract: Part 4: Smart Environments, Agents, and Robots. International audience. This paper presents a Partially Observable Markov Decision Process (POMDP) model for action planning and human error detection during Activities of Daily Living (ADLs). The model is integrated into a sub-component of an assistive system designed for stroke survivors, called the Artificial Intelligent Planning System (AIPS). Its main goal is to monitor the user's history of actions during a specific task and to provide meaningful assi…
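To make the abstract's idea more concrete, the sketch below shows the standard POMDP belief update that such a planner relies on: the system keeps a probability distribution over the user's true (unobserved) state and revises it after each noisy observation of the user's actions. All state names, actions and probabilities here are invented for illustration and are not taken from the AIPS implementation.

```python
# Minimal POMDP belief-update sketch (illustrative only, not the AIPS code).
# Hypothetical states, actions and probabilities for a single ADL task step.
import numpy as np

# Hypothetical user states: whether the current task step was done correctly.
STATES = ["step_done_correctly", "step_skipped"]
OBS = ["observed_correct", "observed_error"]

# Transition model T[a][s, s'] = P(s' | s, a), one matrix per assistive action.
T = {
    "prompt_user": np.array([[0.95, 0.05],
                             [0.70, 0.30]]),
    "do_nothing":  np.array([[0.90, 0.10],
                             [0.10, 0.90]]),
}

# Observation model O[a][s', o] = P(o | s', a); the action-recognition output
# is noisy, which is why the user's state is only partially observable.
O = {
    "prompt_user": np.array([[0.85, 0.15],
                             [0.20, 0.80]]),
    "do_nothing":  np.array([[0.85, 0.15],
                             [0.20, 0.80]]),
}

def belief_update(b, action, obs_idx):
    """Standard POMDP belief update: b'(s') is proportional to
    O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    predicted = T[action].T @ b              # predict the next-state distribution
    updated = O[action][:, obs_idx] * predicted
    return updated / updated.sum()           # normalise to a probability distribution

# Start uncertain about whether the user performed the step correctly.
belief = np.array([0.5, 0.5])
# The system takes no action and then receives an error-like observation.
belief = belief_update(belief, "do_nothing", OBS.index("observed_error"))
print(dict(zip(STATES, belief.round(3))))
```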

Cited by 12 publications (7 citation statements). References 22 publications.
“…In such a case, one solution is to implement a Partially Observable MDP [54], which will model these uncertainties, and enable the assistive system to act even if the user's state is only partially known from its point of view. Such a system has already been implemented in CogWatch [55,49,56], and was tested with success via simulation. The next step is to let stroke survivors interact with the POMDP-based assistive system and evaluate its performance under uncertainty.…”
Section: Results (mentioning, confidence: 99%)
“…The autonomous systems that attempt to create a realistic model of the humans in the environment must consider the probabilities associated with humans making mistakes [Wu et al., 2017; Jean-Baptiste et al., 2015; Charles et al., 2018]. These systems use Markov chains (MCs) to model the stochastic behaviour of humans, such as their likelihood to miss a piece of litter in a picking exercise [Junges et al., 2018].…”
Section: Related Work (mentioning, confidence: 99%)
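As a concrete illustration of the Markov-chain modelling of human mistakes mentioned in the statement above, the toy sketch below propagates a hypothetical two-state "correct/mistake" chain over several task steps; the states and transition probabilities are made up for illustration and do not come from the cited works.

```python
# Toy Markov chain for stochastic human behaviour (illustrative only).
import numpy as np

states = ["correct", "mistake"]
# P[i, j] = probability of moving from state i to state j at the next step.
P = np.array([[0.92, 0.08],    # a correct step is usually followed by another correct step
              [0.60, 0.40]])   # after a mistake, recovery is likely but not certain

dist = np.array([1.0, 0.0])    # start in the "correct" state
for step in range(1, 6):
    dist = dist @ P            # propagate the state distribution one step forward
    print(f"step {step}: P(mistake) = {dist[1]:.3f}")
```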
“…Having some knowledge of where the object is being used could also influence the definition of appropriate actions, through Cultural Engagement. Combining inferences drawn from Motor and Morphological Engagement, the object could infer the most likely intention of the person, and use this inference to provide additional cues and guidance (Jean-Baptiste et al., 2015).…”
Section: Introduction (mentioning, confidence: 99%)