1997
DOI: 10.1002/(sici)1097-4571(199711)48:11<1004::aid-asi4>3.0.co;2-#
Evaluating a multimedia authoring tool

Cited by 18 publications (9 citation statements)
References 23 publications
“…One is to match descriptions of usability problems at varying degrees of similarity. John and Mashyna (1997) distinguished a precise and a vague hit between usability problems; Connell et al (2004) similarly distinguished a hit from a possible hit. While matching in degrees reduces part of the uncertainty of matching, it is not a general solution because it does not remove most of the difficulties discussed above.…”
Section: Related Work (supporting)
confidence: 61%
“…John and Mashyna (1997) provided examples taken from studies comparing evaluation methods at the problem type level, without data on problem tokens or where problem types cover disparate problem tokens. They argued that these studies find a large overlap between sets of problems, possibly leading to an overrating of the success of methods.…”
Section: Dogma No 2: Matching Problem Descriptions Is Straightforward (mentioning)
confidence: 99%
“…Studies have also suggested that the extent to which evaluators are guided by heuristics is uncertain (Doubleday et al 1997): 'It is difficult to know how far the heuristics guided the evaluators, even though they were given a detailed synopsis of the meaning of each heuristic' (p. 107). John and Mashyna (1997) found, similarly to the study by Jeffries quoted above, that only about two-thirds of the problems found during a cognitive walkthrough were actually attributed by the evaluator to the technique. Regarding empirical evaluation methods, several studies suggest that think-aloud practice differs markedly from prescriptions (e.g.…”
Section: '[T]he User Action Framework Allows Usability Engineers To N (mentioning)
confidence: 99%
“…To identify UPs in three aspects: game interface (e.g., button, navigation), game mechanism (e.g., ease of use/control, learnability), and gameplay (e.g., goal, responsiveness). Criteria for UPs were applied (John and Mashyna, 1997).…”
Section: Section 3: Adaptivity (mentioning)
confidence: 99%
“…Usability: Observations were a crucial technique for identifying usability problems (UPs), and traditional criteria for UPs were applied (John and Mashyna, 1997). A preliminary list of UPs was verified against the videorecordings.…”
Section: Revisit Of Four-dimensional Evaluation Framework (mentioning)
confidence: 99%