2014 IEEE International Conference on Mechatronics and Automation
DOI: 10.1109/icma.2014.6885705

Implicit human intention inference through gaze cues for people with limited motion ability

Abstract: Promising assistive technologies bring hope of independent daily living for elderly and disabled people. However, most modern human-machine communication means are not accessible to those with very limited motion ability, who cannot effectively express their service requests. In this paper, we present a novel interaction framework that facilitates communication between humans and assistive devices. In the framework, human intention is inferred implicitly by monitoring the gaze…


Cited by 10 publications (10 citation statements). References 25 publications.
“…Although capturing and predicting user behavior is time-consuming and hard to log, numerous studies have tried to extract behavioral intentions using classification, clustering, and statistical techniques. Li et al. [21] presented a novel interactive framework to facilitate communication between humans and assistive devices. It was used to reduce the effort that elderly and disabled people must make to interact with machines, based on gaze movements.…”
Section: Behavioral Intentions (mentioning)
confidence: 99%
“…In [20], the authors proposed a supervised query-intent model for kids (QuIK) to help children aged 6-15 formulate their queries in search engines, which leads to more concise and relevant search results. In [21], a novel semi-supervised sequence-clustering approach was presented to extract and group users' interaction sequences, then assign them to predefined tasks and visualize them intuitively. Recommendation (MEIR) was proposed to automatically recommend user intentions according to previous history.…”
Section: Supervised Learning (mentioning)
confidence: 99%
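
To make the sequence-clustering step above concrete, here is a minimal sketch that groups user interaction sequences by pairwise edit distance and hierarchical clustering. The interaction logs, distance metric, and cluster count are assumptions for illustration, and the semi-supervised seeding described in [21] is omitted:

```python
# Minimal sketch: grouping user interaction sequences by edit distance.
# The logs and cluster count are illustrative, not from the cited paper.
from itertools import product
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def edit_distance(a, b):
    """Classic Levenshtein distance between two action sequences."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[m, n]

# Hypothetical interaction logs: each sequence is a list of UI actions.
logs = [["open", "search", "click"], ["open", "search", "scroll", "click"],
        ["open", "settings", "toggle"], ["open", "settings", "toggle", "close"]]

n = len(logs)
dist = np.zeros((n, n))
for i, j in product(range(n), range(n)):
    dist[i, j] = edit_distance(logs[i], logs[j])

# Agglomerative clustering on the condensed pairwise-distance matrix.
labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(labels)  # e.g. [1 1 2 2]: search-like vs. settings-like sessions
```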
“…As eye movement requires minimal physical effort by the user, gaze behaviour has been widely researched in the context of elderly care and assistive technology [89,90]. Li and Zhang designed a framework to provide assistive systems with the capability of understanding the user's intention implicitly by monitoring their overt visual attention [91]. For example, if the user gazes at a specific location for a pre-determined amount of time, the robot interprets it as an implicit signal conveying the need for further information [92].…”
Section: Communicative Gaze Behaviour (mentioning)
confidence: 99%
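
The dwell-based implicit signal quoted above (gazing at a location for a pre-determined time) can be sketched as follows, assuming a stream of timestamped (t, x, y) gaze samples; the radius and dwell-time thresholds are illustrative, not values from the cited work:

```python
# Minimal sketch of dwell-based implicit signaling from a gaze stream.
DWELL_RADIUS_PX = 40      # gaze must stay within this radius (assumed)
DWELL_TIME_S = 1.5        # pre-determined dwell duration (assumed)

def detect_dwell(samples):
    """Yield (x, y) anchors where gaze dwelled long enough to count as a request."""
    anchor = None   # (t, x, y) where the current fixation started
    for t, x, y in samples:
        if anchor is None:
            anchor = (t, x, y)
            continue
        t0, x0, y0 = anchor
        if (x - x0) ** 2 + (y - y0) ** 2 > DWELL_RADIUS_PX ** 2:
            anchor = (t, x, y)          # gaze moved away: restart the fixation
        elif t - t0 >= DWELL_TIME_S:
            yield (x0, y0)              # implicit signal: user wants info here
            anchor = (t, x, y)          # reset so one dwell fires only once

# Usage with a synthetic gaze trace at ~60 Hz: 2 s of steady gaze at (100, 200).
trace = [(i / 60, 100, 200) for i in range(120)]
print(list(detect_dwell(trace)))  # -> [(100, 200)]
```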
“…1) Intent Inference based on the Eye Modality: We formulate the human intent inference using the eye modality in (1), similar to our previous work [28]-[30], where T is a set of accessible objects that could be the target and E is a sequence of eye-gaze data measured since the eyes began dwelling. In other words, the robotic agent infers the human intent T_e that maximizes the posterior probability given the sequence of eye-gaze data.…”
Section: A Complementary Intent Inference (mentioning)
confidence: 99%
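
The quoted formulation amounts to a maximum-a-posteriori choice over candidate targets, T_e = argmax_T P(T | E). Below is a minimal sketch under an assumed i.i.d. Gaussian gaze-noise model and uniform prior; the object set, noise scale, and prior are illustrative, not taken from [28]-[30]:

```python
# Minimal sketch of posterior-maximizing intent inference from gaze.
import numpy as np

objects = {"cup": np.array([0.2, 0.5]),    # T: accessible candidate targets
           "book": np.array([0.7, 0.3]),
           "phone": np.array([0.5, 0.8])}
prior = {name: 1 / len(objects) for name in objects}  # uniform prior (assumed)
SIGMA = 0.1                                           # gaze noise std (assumed)

def infer_intent(gaze_seq):
    """Return argmax_T P(T | E) for gaze sequence E (log-space for stability)."""
    log_post = {}
    for name, pos in objects.items():
        # log P(E | T): sum of Gaussian log-likelihoods of each gaze point
        ll = sum(-np.sum((g - pos) ** 2) / (2 * SIGMA ** 2) for g in gaze_seq)
        log_post[name] = ll + np.log(prior[name])
    return max(log_post, key=log_post.get)

# Gaze samples dwelling near the cup:
E = [np.array([0.22, 0.48]), np.array([0.18, 0.52]), np.array([0.21, 0.49])]
print(infer_intent(E))  # -> "cup"
```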
“…In the simulation, the human input was synthesized to have a curved approach trajectory toward the target. The simulated human input at each time step t was synthesized as x'_t in (27)-(28), where the prime symbol indicates a synthesized human input. The magnitude of x'_t was defined by a constant A, and its direction was obtained by rotating the direction vector u_t, which points at the target from the current robot location, by an angle θ_t.…”
Section: A Simulation Setup (mentioning)
confidence: 99%
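
A minimal sketch of the synthesized input described above, assuming 2-D positions: constant magnitude A, with the unit direction toward the target rotated by θ_t. The numeric values are placeholders, and the schedule for θ_t given in (27)-(28) of the citing paper is not reproduced here:

```python
# Minimal sketch of the synthesized human input x'_t = A * R(theta_t) u_t.
import numpy as np

A = 1.0  # constant input magnitude (placeholder value)

def synthesize_input(robot_pos, target_pos, theta_t):
    """Return x'_t: a constant-magnitude input rotated off the target direction."""
    u_t = target_pos - robot_pos
    u_t = u_t / np.linalg.norm(u_t)            # unit vector toward the target
    c, s = np.cos(theta_t), np.sin(theta_t)
    R = np.array([[c, -s], [s, c]])            # 2-D rotation by theta_t
    return A * (R @ u_t)

# Usage: rotate the pointing direction by 20 degrees at this time step,
# producing the curved approach trajectory described in the quote.
x_t = synthesize_input(np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.deg2rad(20))
print(x_t)  # -> approximately [0.94, 0.34]
```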