Proceedings of the 6th Conference on Designing Interactive Systems 2006
DOI: 10.1145/1142405.1142439
What do usability evaluators do in practice?

Cited by 101 publications (21 citation statements)
References 26 publications
“…We conducted the study in a separate session with each subject. At the beginning of each session, we asked the participants to 'think aloud' [47,50,81] during the study. Specifically, we asked them to verbally describe what they were doing, to comment on any of their concerns, and to say whatever comes to their mind while solving the given tasks.…”
Section: Study Process and Data Collection
Citation type: mentioning
Confidence: 99%
“…Several code improvement techniques and protocols have been used for AVISPA refactoring. The think-aloud method [13] has been used to capture software user experience; AVISPA was assessed with this methodology as exploratory research into the tool's inner construction, code concepts, and architectural decisions. It also allows retrieving software process model experts' knowledge and opinions.…”
Section: AVISPA Code Improvement Techniques
Citation type: mentioning
Confidence: 99%
“…It is also a regular topic in practitioner discourse [87,64]. Equally important and perhaps more extensive has been the topic of 'think-aloud' protocols and their implementation in testing [53,10,73,6,15,52], along with some interest in the divergences between academic and practitioner uses of such techniques [67]. Boren and Ramey [6] in particular offer one of the most detailed accounts of what testing involves from the perspective of moderator-participant interactions.…”
Section: Lab-based Usability Testing in HCI Research and UX Practice
Citation type: mentioning
Confidence: 99%
“…For instance, Boren and Ramey argue for the theoretical grounding of usability testing practices, pointing to deficiencies in the conduct of usability practitioners in moderating tests [6]. Nørgaard and Hornbæk, who do not seek to "reprehend the practice of usability testing", nevertheless still sound a note of concern over practitioners' work regarding absences of systematic analysis and potential bias (e.g., in the anticipation of usability problems and their shaping of the design of test tasks) [73]. In some ways it is unsurprising that this framing is applied to industrial applications of usability testing, since a significant portion of HCI research also treats usability testing performed in academic research environments in a normative scientific frame: one need only refer to the well-known "damaged merchandise" discussions of the 1990s [44,74] in which this orientation to 'usability as normal science' has been litigated (and the confusions around "validat[ing] discount methods" [16]).…”
Section: Conceptualising Usability Testing
Citation type: mentioning
Confidence: 99%