Gaze-based implicit intention inference offers a new human-robot interaction modality that enables people with disabilities to accomplish activities of daily living independently. Existing gaze-based intention inference is mainly implemented with data-driven methods that ignore prior object information in intention expression, which yields low inference accuracy. To improve inference accuracy, we propose a gaze-based hybrid method, tailored to disability applications, that integrates model-driven and data-driven intention inference. Specifically, an intention is modeled as the combination of a verb and nouns. The objects corresponding to the nouns are regarded as intention-interpreting objects and serve as prior knowledge, encoded as punished factors. A punished factor captures object information, i.e., the priority of an object in object selection. A class-specific attribute weighted naïve Bayes model, learned from training data, represents the relationship between intentions and objects. An intention inference engine is developed by combining this human prior knowledge with the data-driven class-specific attribute weighted naïve Bayes model. Computer simulations (i) verify the contribution of each critical component of the proposed model, (ii) evaluate the inference accuracy of the proposed model, and (iii) show that the proposed method outperforms state-of-the-art intention inference methods in terms of accuracy.
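The abstract does not specify the exact form of the punished factor or how it enters the class-specific attribute weighted naïve Bayes posterior. The following Python sketch is one plausible reading, in which the data-driven score is a weighted naïve Bayes log-posterior and the punished factor is assumed to act as an additive log-space penalty reflecting object-selection priority; the function name `infer_intention`, the array shapes, and the penalty form are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def infer_intention(log_likelihoods, log_priors, attr_weights, punished_factors):
    """Return a posterior over candidate intentions (verb-noun combinations).

    log_likelihoods  : log P(x_j | c) at the observed attribute values, shape (n_intents, n_attrs)
    log_priors       : log P(c), shape (n_intents,)
    attr_weights     : class-specific attribute weights w[c, j], shape (n_intents, n_attrs)
    punished_factors : assumed additive log-space penalties encoding object priority,
                       shape (n_intents,)  -- hypothetical formulation
    """
    # Data-driven part: class-specific attribute weighted naive Bayes score,
    # log P(c) + sum_j w[c, j] * log P(x_j | c)
    scores = log_priors + (attr_weights * log_likelihoods).sum(axis=1)
    # Model-driven part: penalize intentions whose interpreting objects
    # have low selection priority (the punished factor).
    scores = scores - punished_factors
    # Softmax normalization to a probability distribution over intentions.
    scores -= scores.max()
    posterior = np.exp(scores)
    return posterior / posterior.sum()

# Toy usage: 3 candidate intentions, 4 gaze-derived object attributes.
rng = np.random.default_rng(0)
log_lik = np.log(rng.uniform(0.1, 1.0, size=(3, 4)))
log_prior = np.log(np.array([0.4, 0.35, 0.25]))
weights = rng.uniform(0.5, 1.5, size=(3, 4))
penalties = np.array([0.0, 0.2, 0.5])
print(infer_intention(log_lik, log_prior, weights, penalties))
```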