2005
DOI: 10.1007/s11257-005-1269-8

Exploring Issues of User Model Transparency and Proactive Behaviour in an Office Environment Control System

Abstract: It is important that systems that exhibit proactive behaviour do so in a way that does not surprise or frustrate the user. Consequently, it is desirable for such systems to be both personalised and designed in such a way as to enable the user to scrutinise her user model (part of which should hold the rules describing the behaviour of the system). This article describes ongoing work to investigate the design of a prototype system that can learn a given user's behaviour in an office environment in order to use…

Year Published (citing works): 2006–2024

Citations: cited by 61 publications (50 citation statements)
References: 21 publications
“…Dzindolet et al (2003) found that explaining why errors might occur increased trust in automated tools, even when this was not appropriate. Therefore, transparency and task performance can influence each other negatively (see also Cheverst et al 2005). Bilgic and Mooney (2005) note that explanations should not aim just to promote ('sell') a system's recommendations, but that explanations should enable the user to accurately assess recommendation quality.…”
Section: Transparency of User-Adaptive Systems and Study Hypotheses (mentioning)
confidence: 99%
“…To counter this, context-aware applications should be intelligible (also called transparent, comprehensible, scrutable) by providing explanations of their behavior [1]. Indeed, there have already been several context-aware applications that support some level of intelligibility (e.g., [2,17,18,19,22]). These systems support a limited set of explanations users can ask for: What, Certainty, Inputs, Why, and Why Not.…”
Section: Introduction (mentioning)
confidence: 99%
“…Intelligible Context Models: Several papers have established and articulated the need for intelligible models of context [4,6,21]. A few systems have also explored support for intelligible context.…”
Section: Limitations (mentioning)
confidence: 99%
“…A few systems have also explored support for intelligible context. Cheverst et al [6] supported users with scrutable decision tree rules and context histories. The PersonisAD [2] framework allowed developers to access supporting evidence for context items.…”
Section: Limitations (mentioning)
confidence: 99%