1991
DOI: 10.1080/01449299108924272
A framework for human factors evaluation

Abstract: Successful human factors evaluation of interactive computer systems has tended to rely heavily on the experience of the practitioner, who has had little explicit support on which to draw. This paper concerns support for evaluation in the form of a framework for describing and guiding the general activity. The paper starts with a critique of current approaches to evaluation, and particularly of evaluation within the 'design for usability' approach. Following a definition of evaluation, a framework is proposed th…

Year Published: 1996–2017

Publication Types

Select...
4
3
1

Relationship

0
8

Authors

Journals

Cited by 98 publications (61 citation statements)
References 10 publications
“…In particular, sampling the objects that are present in the real-life environment and that are relevant for users to achieve their goals seems to be more important for task representativeness than the representativeness of the prototype. This finding is consistent with earlier findings about representative design (Whitefield, Wilson, & Dowell, 1991;Sefelin, Tscheligi, & Giller, 2003).…”
Section: Summary of the Results (supporting)
confidence: 83%
“…Whitefield, Wilson, and Dowell (1991) suggest that evaluation in the human factors discipline involves an assessment of the conformity between a system's performance and its desired performance. In many cases, practitioners may assume that evaluation is a single event that occurs at or near the end of the design process.…”
Section: Usability Evaluation Methods (mentioning)
confidence: 99%
“…The five dimensions include: (1) formative versus summative, (2) discovery versus decision, (3) formalized versus informal, (4) designer involvement versus user involvement, and (5) complete versus component. Whitefield, Wilson, and Dowell (1991) provided a framework for human factors evaluation using four categories: (1) analytic methods, (2) specialist reports, (3) user reports, and (4) …”
Section: Usability Evaluation Methods (mentioning)
confidence: 99%
“…Nielsen [8] stated, as a first rule of usability, "don't listen to users", and argued that users' design feedback should be limited to preference data after having used the interactive system in question. Users' design feedback may be biased due to a desire to report what the evaluator wants to hear, imperfect memory, and rationalization of their own behaviour [8,9]. As discussed by Gould and Lewis [10], it can be challenging to elicit useful design information from users, as they may not have considered alternative approaches or may be ignorant of relevant alternatives; users may simply be unaware of what they need.…”
Section: Research (mentioning)
confidence: 99%