CHI '09 Extended Abstracts on Human Factors in Computing Systems 2009
DOI: 10.1145/1520340.1520619

User experience evaluation in the wild

Cited by 20 publications (11 citation statements: 0 supporting, 11 mentioning, 0 contrasting); citing publications span 2010 to 2021.
References 5 publications.

Citation statements, ordered by relevance:
“…In particular, constructing an accurate understanding of what takes place during the experiment is a major challenge when conducting studies in the wild. The issues one encounters include users anticipating and modifying their behavior to satisfy the researcher's needs [5] [17], discrepancies between self-reflection [11] and anticipated needs [23] and the logged use of the system, and analyzing the large scale, heterogeneous data captured during the study [15]. Clearly, no single evaluation method can account for all of these concerns, and researchers have to choose from and adapt existing approaches to fit the unique needs of their studies.…”
Section: Related Work (mentioning)
confidence: 99%
“…Experimenting in real-world situations, while technologically and methodologically demanding, can inform us about the real usage of the new technologies [15], especially when attempting to understand the context-dependent factors such as mobility and the effects of environment [17]. However, a key challenge in conducting these evaluations is that different assessment goals can make it difficult to select the appropriate evaluation method [2].…”
Section: Introduction (mentioning)
confidence: 99%
“…In another example, Jambon and Meillon [27] equipped a group of skiers with a self-performance system, which relayed usage data from a camera, accelerometer and GPS back to the researchers via wireless Internet.…”
Section: Lab Studies (mentioning)
confidence: 99%
“…That is perhaps also why the labels "in-situ" and "in-the-wild" have been adapted by some papers (e.g. [8,12,24,27,54,55,63]) as they are really much better at capturing the essence of what field studies should be about. So, just like a lab study without control and replicability would be considered a poor one, a field study that does not really take the researcher into an uncontrolled real world situation is perhaps not a good one either.…”
Section: Beyond Non-wild Snap-shot Field Studies (mentioning)
confidence: 99%
“…However, the observer's presence can disturb the users and a human observer can miss quite a lot of what is really happening. Logging and recording, to recreate the experience later on, can be used as in [8]. Those methods give a perfect but partial rendering of the experience, and they still require a long time to be analyzed afterwards, and much equipment and organization during the evaluation.…”
Section: State of the Art / Previous Work (mentioning)
confidence: 99%