2006
DOI: 10.1007/11965152_5
Combining User Modeling and Machine Learning to Predict Users’ Multimodal Integration Patterns

Cited by 20 publications (5 citation statements)
References 13 publications
“…There has been significant work attempting to model users of technology through ML methods (see Webb, Pazzani, and Billsus [2001] for a review), but with little to no attention paid to how such models can inform system design. For example, Huang, Oviatt, and Lunsford (2006) developed a user model based on ML to better predict users' multimodal integration patterns via speech and a pen to develop a system that dynamically responded to user behaviors. However, even though they were successful in developing a predictive model, this model was never actually applied to the design of a system.…”
Section: Design Applications of ML
confidence: 99%
“…Integration patterns are also investigated in [3,4], where machine learning-based approaches to predicting a user's integration pattern were presented, based on the data of [7]. It is shown that 15 samples per user are enough to predict a user's integration pattern with an accuracy of 81%, so integration patterns can be detected relatively quickly, even in an automated way.…”
Section: Related Work
confidence: 99%
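The statement above reports that a user's dominant integration pattern can be predicted from roughly 15 observed samples. As an illustrative sketch (not the classifier from the cited work, which used richer ML features), a minimal baseline is a majority vote over a user's first observed samples; the labels and data below are hypothetical:

```python
from collections import Counter

def predict_dominant_pattern(samples):
    """Predict a user's dominant multimodal integration pattern
    (e.g. "simultaneous" vs. "sequential") by majority vote over
    the observed per-user samples. Labels are illustrative; the
    cited approaches used trained ML models, not a simple vote."""
    if not samples:
        raise ValueError("need at least one sample")
    # most_common(1) returns [(label, count)] for the modal label
    return Counter(samples).most_common(1)[0][0]

# Hypothetical observation stream for one user: after ~15 samples
# the dominant pattern is already clear.
observed = ["sequential"] * 12 + ["simultaneous"] * 3
print(predict_dominant_pattern(observed))  # → sequential
```

Even this trivial baseline converges quickly for users with stable patterns, which is consistent with the finding that users rarely switch integration styles.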
“…The main measures that may be relevant are the input order (touch before speech, or speech before touch) and the time distances between the two modalities. The 11 analyzed participants split into two groups: those who always started with speech input (7) and those who always started with touch input (4). None of them changed the input order during the experiment.…”
Section: Related Work
confidence: 99%
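The two measures named above (input order and inter-modal time distance) can be computed directly from onset timestamps. A minimal sketch, with hypothetical function and parameter names:

```python
def input_order_and_gap(speech_onset_ms, touch_onset_ms):
    """Return which modality started first and the absolute time
    distance between the two onsets, the two measures highlighted
    in the cited analysis. Timestamps are in milliseconds from a
    shared clock; names are illustrative."""
    gap_ms = abs(speech_onset_ms - touch_onset_ms)
    if speech_onset_ms < touch_onset_ms:
        order = "speech-first"
    else:
        order = "touch-first"
    return order, gap_ms

# Hypothetical onsets for two multimodal commands:
print(input_order_and_gap(120, 850))  # speech preceded touch by 730 ms
print(input_order_and_gap(900, 400))  # touch preceded speech by 500 ms
```

Tracking these values per user would suffice to reproduce the grouping reported above, since no participant changed input order during the experiment.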
“…speech and gestures should ideally be adapted to the user and the context (see [Huang et al. 2006] for a machine learning approach to that).…”
Section: Recognizing Speech and Gestures
confidence: 99%