To better support usability practice, most usability research focuses on evaluation methods, and new ideas in usability research are mostly proposed as new evaluation methods. Many publications describe experiments that compare methods. Such comparisons may indicate that some methods have important deficiencies and often advise usability practitioners to prefer a specific method in a particular situation. An expectation persists in human-computer interaction (HCI) that results about evaluation methods should be the standard "unit of contribution," rather than larger units (e.g., usability work as a whole) or smaller ones (e.g., the impact of specific aspects of a method). This article argues that this focus on comparisons and method innovation ignores the reality that usability evaluation methods are loose, incomplete collections of resources, which successful practitioners configure, adapt, and complement to match specific project circumstances. A review of existing research on methods and resources identifies resources associated with specific evaluation methods, as well as resources that can complement existing methods or be used on their own. Next, a generic classification scheme for evaluation resources is developed, and the scheme is extended with project-specific resources that affect the effective use of methods. With these reviews and analyses in place, implications for research, teaching, and practice are derived. Throughout, the article draws on culinary analogies. A recipe is nothing without its ingredients, and just as the quality of what is cooked reflects the quality of its ingredients, so too does the quality of usability work reflect the quality of the resources that are configured and combined. A method, like a recipe, succeeds only through the skillful selection, combination, and adaptation of its ingredients.
We present an exploration of reading patterns and usability in visualizations of electronic documents. Twenty subjects wrote essays and answered questions about scientific documents using an overview+detail, a fisheye, and a linear interface. We study reading patterns using progression maps, which visualize how subjects' reading activity progresses through a document, and visibility maps, which show how long different parts of the document are visible. The reading patterns help explain differences in usability between the interfaces and show how the interfaces affect the way subjects read. With the overview+detail interface, subjects receive higher grades for their essays; all but one subject prefer this interface. With the fisheye interface, subjects spend more time gaining an overview of the document and less time reading the details. Thus, they read the documents faster but show lower incidental learning. We also show that subjects only briefly make visible those parts of the document that are not initially readable in the fisheye interface, even though they express a lack of trust in the algorithm underlying that interface. When answering questions, subjects use the overview to jump directly to answers and to already-visited parts of the document. However, subjects are slower at answering questions with the overview+detail interface. From the visualizations of reading activity, we find that subjects using the overview+detail interface often continue to explore the document even after a satisfactory answer to the given question has been read. Thus, overviews may grab subjects' attention and possibly distract them.
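Both kinds of maps are, in essence, aggregations over logged viewport events. The abstract does not specify the logging format, so the following is a minimal sketch under assumed inputs: the log layout, line-level granularity, and the function name `visibility_map` are illustrative assumptions, not the instrumentation used in the study.

```python
# Sketch: accumulate a visibility map from timestamped viewport (scroll)
# events. Assumed log format: (timestamp_sec, first_visible_line,
# last_visible_line), in time order; each viewport is assumed to hold
# until the next event, so a session-end sentinel should close the log.

from collections import defaultdict

def visibility_map(viewport_log):
    """Total time (seconds) each document line was visible."""
    seconds_visible = defaultdict(float)
    for (t0, first, last), (t1, _, _) in zip(viewport_log, viewport_log[1:]):
        duration = t1 - t0
        for line in range(first, last + 1):
            seconds_visible[line] += duration
    return seconds_visible

# Example: two scrolls, then a session-end sentinel at t = 30 s.
log = [(0.0, 1, 20), (12.5, 10, 30), (30.0, 0, -1)]
vis = visibility_map(log)
print(vis[15])  # visible in both viewports: 12.5 + 17.5 = 30.0 seconds
```

A progression map can be read off the same log, for example by plotting the midpoint (first + last) / 2 of each viewport against its timestamp.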
Usability problems predicted by evaluation techniques are useful input to systems development; it is uncertain whether redesign proposals aimed at alleviating those problems are likewise useful. We present a study of how the developers of a large web application assess usability problems and redesign proposals as input to their systems development. Problems and redesign proposals were generated by 43 evaluators using an inspection technique and think-aloud testing. Developers assessed redesign proposals as having higher utility for their work than usability problems. In interviews, they explained how redesign proposals gave them new ideas for tackling well-known problems; redesign proposals were also seen as constructive and concrete input. Few usability problems were new to the developers, but the problems supported prioritizing ongoing development of the application and making design decisions. No developer, however, wanted to receive only problems or only redesigns. We suggest developing and using redesign proposals as an integral part of usability evaluation.
Background: Patients suffering from depression have a high risk of relapse and readmission in the weeks following discharge from inpatient wards. Electronic self-monitoring systems that offer patient-communication features are now available to provide daily support to patients, but the usability, acceptability, and adherence to these systems have only been sparsely investigated.
Objective: We aim to test the usability, acceptability, adherence, and clinical outcome of a newly developed computer-based electronic self-assessment system (the Daybuilder system) in patients suffering from depression, in the period from discharge until commencing outpatient treatment in the Intensive Outpatient Unit for Affective Disorders.
Methods: Patients suffering from unipolar major depression who were referred from inpatient wards to an intensive outpatient unit were included in this study before their discharge and were followed for four weeks. User satisfaction was assessed using semiqualitative questionnaires and the System Usability Scale (SUS). Patients were interviewed at baseline and at endpoint with the Hamilton depression rating scale (HAM-D17), the Major Depression Inventory (MDI), and the 5-item World Health Organization Well-Being Index (WHO-5). In this four-week period, patients used the Daybuilder system to self-monitor mood, sleep, activity, and medication adherence on a daily basis. The system displayed a graphical representation of the data that was shown simultaneously to patients and clinicians. Patients were phoned weekly to discuss their data entries. The primary outcomes were usability, acceptability, and adherence to the system. The secondary outcomes were changes in the electronically self-assessed mood, sleep, and activity scores, and in scores on the HAM-D17, MDI, and WHO-5 scales.
Results: In total, 76% of enrolled patients (34/45) completed the four-week study. Five patients were readmitted due to relapse. The 34 patients who completed the study entered data for mood on 93.8% of the days (872/930), sleep on 89.8% of the days (835/930), activity on 85.6% of the days (796/930), and medication on 88.0% of the days (818/930). SUS scores were 86.2 (standard deviation [SD] 9.7), and 79% of the patients (27/34) found that the system lived up to their expectations. A significant improvement in depression severity was found on the HAM-D17, from 18.0 (SD 6.5) to 13.3 (SD 7.3; P<.01); on the MDI, from 27.1 (SD 13.1) to 22.1 (SD 12.7; P=.006); and in quality of life on the WHO-5, from 31.3 (SD 22.9) to 43.4 (SD 22.1; P<.001); but not in self-assessed mood (P=.08). Mood and sleep parameters were highly variable from day to day. Sleep offset was significantly delayed from baseline, averaging 48 minutes (standard error 12 minutes; P<.001). Furthermore, when delay of sleep onset was estimated over the study period (with sleep quality included in the model), it showed a significant negative effect on mood (P=.03).
Conclusions: The Daybuilder system performed well technically, and patients were satisfied with the system and h...
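For reference, the adherence figures above are simple ratios of days-with-entry to observed patient-days (930 in total across the 34 completers). A minimal sketch reproducing them, with the entry counts taken from the abstract and the function name as an assumption:

```python
def adherence(days_with_entry, days_observed):
    """Fraction of observed days on which a self-assessment was entered."""
    return days_with_entry / days_observed

# Entry counts per domain over 930 observed patient-days (34 completers).
for domain, entered in [("mood", 872), ("sleep", 835),
                        ("activity", 796), ("medication", 818)]:
    print(f"{domain}: {adherence(entered, 930):.1%}")
# mood: 93.8%, sleep: 89.8%, activity: 85.6%, medication: 88.0%
```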