One of the main results from the HCI research field in the last decade is a range of usability evaluation methods, some of which have been adopted in software development practice. The methods have matured considerably, and they have been scrutinized and compared, with quite varying results. This poster identifies four levels of evidence in what we know and don't know about usability evaluation methods. Firstly, reports on individual usability evaluation methods (including background, theoretical underpinnings, experience, and prescriptive guides). Secondly, systematic studies of individual usability evaluation methods. Thirdly, comparative studies of the relative strengths and weaknesses of the methods. Finally, surveys extracting the commonalities of the comparative studies. The first level comprises numerous but scattered reports. The second and third levels comprise a number of reports, while no reports so far, to the best of our knowledge, have addressed the fourth level. Based on these observations, the poster identifies areas in need of further scrutiny before we have a coherent body of knowledge on usability evaluation methods that serves practitioners' needs and meets researchers' requirements for methodological rigour.

The Kentucky Interface Preference Inventory (KIPI) is a 40-item forced-choice questionnaire that assesses user preference for (1) manual vs. vocal controls, (2) continuous vs. discrete controls, (3) visual vs. auditory displays, (4) analog vs. verbal displays, and (5) iconic vs. textual displays. The KIPI also includes a context sensitivity scale, which measures how often respondents select interfaces that match the designs known to enhance user performance in the specific context described in the item (i.e., for the specific task or setting). In the present study, we tried to increase respondents' context sensitivity by explicitly telling them to select designs that should enhance the performance of the majority of users. That is, we wanted to determine whether instructions emphasizing performance could lead our respondents to become better "intuitive ergonomists."

The KIPI was administered twice to 104 respondents (test-retest interval of 7-10 days). Half of the respondents were given general preference ("no wrong answer") instructions both times; the rest received usability instructions at Time 2. The group receiving usability instructions changed their responses more than the control group did; however, they did not achieve greater context sensitivity. Instead, the usability instructions caused overall preference shifts toward vocal rather than manual control, auditory rather than visual displays, discrete rather than continuous control, verbal rather than analog displays, and iconic rather than textual symbology.
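For concreteness, the context sensitivity scale can be read as a simple proportion score: the fraction of items on which a respondent picks the option known to enhance performance in that item's context. The sketch below is an assumption for illustration only; the abstract does not specify the KIPI's actual scoring procedure, and the CONTEXT_KEY answer key and the context_sensitivity function are hypothetical names introduced here.

```python
# Illustrative sketch only: the item keys and scoring rule are assumptions,
# not the published KIPI procedure. Context sensitivity is treated as the
# proportion of forced-choice items where the respondent chooses the option
# known to enhance performance for the task/setting described in the item.

from typing import Dict

# Hypothetical answer key: item id -> option ("A" or "B") that matches the
# context-appropriate design for that item.
CONTEXT_KEY: Dict[int, str] = {1: "B", 2: "A", 3: "A", 4: "B", 5: "A"}

def context_sensitivity(responses: Dict[int, str],
                        key: Dict[int, str] = CONTEXT_KEY) -> float:
    """Return the fraction of keyed items where the respondent chose the
    context-appropriate option."""
    scored = [item for item in key if item in responses]
    if not scored:
        return 0.0
    hits = sum(responses[item] == key[item] for item in scored)
    return hits / len(scored)

if __name__ == "__main__":
    respondent = {1: "B", 2: "B", 3: "A", 4: "B", 5: "A"}
    print(f"Context sensitivity: {context_sensitivity(respondent):.2f}")  # 0.80
```

Under this reading, the study's result would mean that usability instructions shifted which options respondents preferred overall without raising this proportion.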