Foundations for an Empirically Determined Scale of Trust in Automated Systems
Jiun-Yin Jian, Ann M. Bisantz, Colin G. Drury, James Llinas
Interim Report (April 1996 to February 1997), dated February 1998. Report No. AFRL-HE-WP-TR-2000-0102. Approved for public release; distribution is unlimited.
Abstract: One component in the successful use of automated systems is the extent to which people trust the automation to perform effectively. To understand the relationship between trust in computerized systems and the use of those systems, we need to be able to measure trust effectively. Although questionnaires regarding trust have been used in prior studies, those questionnaires were theoretically rather than empirically generated and did not distinguish between three potentially different types of trust: human-human trust, human-machine trust, and trust in general. A three-phase experiment, comprising a word elicitation study, a questionnaire study, and a paired comparison study, was performed to better understand similarities and differences between the concepts of trust and distrust, and between the different types of trust. Results indicated that trust and distrust can be considered opposites rather than distinct concepts. Components of trust, in terms of words related to trust, were similar across the three types of trust. Results of a cluster analysis were used to identify 12 potential factors of trust between people and automated systems. These 12 factors were then used to develop a proposed scale to measure trust in automation.
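For readers unfamiliar with this kind of analysis, the following is a minimal sketch (in Python, using an invented word list and placeholder similarity values, not the report's data) of how hierarchical cluster analysis over word-similarity ratings can suggest candidate factors for a trust scale.

```python
# Minimal sketch: hierarchical clustering of trust-related words from
# pairwise similarity ratings. Words and ratings below are illustrative
# placeholders, not data from the report.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

words = ["reliable", "dependable", "deceptive", "wary", "confident", "familiar"]

# Hypothetical mean similarity ratings (0 = unrelated, 1 = identical meaning)
# from a paired-comparison task.
similarity = np.array([
    [1.0, 0.9, 0.1, 0.2, 0.7, 0.6],
    [0.9, 1.0, 0.1, 0.2, 0.6, 0.5],
    [0.1, 0.1, 1.0, 0.7, 0.2, 0.1],
    [0.2, 0.2, 0.7, 1.0, 0.3, 0.2],
    [0.7, 0.6, 0.2, 0.3, 1.0, 0.5],
    [0.6, 0.5, 0.1, 0.2, 0.5, 1.0],
])

# Convert similarities to distances and cluster with average linkage.
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)
tree = linkage(squareform(distance, checks=False), method="average")

# Cut the tree into a chosen number of clusters (2 here for illustration;
# the report derived 12 factors from its full word set).
labels = fcluster(tree, t=2, criterion="maxclust")
for word, label in zip(words, labels):
    print(f"{word}: cluster {label}")
```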
We conducted a classification analysis to identify factors associated with sitting comfort and discomfort. The objective was to investigate the possible multidimensional nature of comfort and discomfort. Descriptors of feelings of comfort and discomfort were solicited from office workers and validated in a questionnaire study, from which 43 descriptors emerged. Forty-two participants rated the similarity of all 903 pairs of descriptors, and we subjected the resulting similarity matrix to multidimensional scaling, factor analysis, and cluster analysis. Two main factors emerged, which were interpreted as comfort and discomfort. Based on these findings, we postulate a hypothetical model for the perception of comfort and discomfort. Comfort and discomfort need to be treated as different and complementary entities in ergonomic investigations.
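As an illustration of the pairwise-rating arithmetic and the scaling step, here is a minimal Python sketch with simulated data: 43 descriptors yield 43 × 42 / 2 = 903 pairs, whose ratings can be assembled into a dissimilarity matrix and passed to multidimensional scaling. The descriptor count comes from the abstract; the ratings and everything else are placeholders.

```python
# Minimal sketch: build a dissimilarity matrix from 903 pairwise similarity
# ratings over 43 descriptors and explore it with MDS. Ratings are simulated.
import numpy as np
from itertools import combinations
from sklearn.manifold import MDS

n_descriptors = 43
pairs = list(combinations(range(n_descriptors), 2))
assert len(pairs) == 903  # matches the number of rated pairs in the abstract

# Simulated mean similarity ratings per pair (0 = dissimilar, 1 = similar).
rng = np.random.default_rng(0)
ratings = rng.uniform(0.0, 1.0, size=len(pairs))

# Assemble a symmetric dissimilarity matrix from the pairwise ratings.
dissimilarity = np.zeros((n_descriptors, n_descriptors))
for (i, j), r in zip(pairs, ratings):
    dissimilarity[i, j] = dissimilarity[j, i] = 1.0 - r

# Two-dimensional MDS; a two-factor solution would correspond to the
# comfort and discomfort dimensions reported in the study.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords.shape)  # (43, 2)
```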
Improvements in workplace design, working posture, and discomfort need to be justified in terms of improvements in performance. A visual inspection task had been investigated previously. The objective of the current study was to demonstrate the interactions among workplace, work duration, discomfort, working posture, and performance in a 2-h typing task. Three keyboard heights were used to change working posture (e.g. joint angles and postural shifts), and thus presumably discomfort (e.g. ratings of perceived discomfort and body part discomfort) and performance (e.g. typing speed, error rate, and error correction rate). The results partially supported the hypothesized posture-comfort-performance interrelationships. Keyboard height affected the working posture adopted. As in previous studies, the rate of postural shifts was a good indicator of discomfort in a VDT task. Discomfort and postural shift rate had adverse effects on performance (e.g. error rate), although these effects on error rate may not be strong.