In a meta-analysis of 43 studies, we examined the effects of multimodal feedback on user performance, comparing visual-auditory and visual-tactile feedback to visual feedback alone. Results indicate that adding a second modality to visual feedback improves performance overall. Both visual-auditory and visual-tactile feedback provided advantages in reducing reaction times and improving performance scores, but neither was effective in reducing error rates. Effects are moderated by task type, workload, and number of tasks. Visual-auditory feedback is most effective when a single task is being performed (g = .87) and under normal workload conditions (g = .71); visual-tactile feedback is more effective when multiple tasks are being performed (g = .77) and when workload is high (g = .84). Both types of multimodal feedback are effective for target acquisition tasks but vary in effectiveness for other task types. Implications for practice and research are discussed.
Information display systems have become increasingly complex and more difficult for human cognition to process effectively. According to Wickens' Multiple Resource Theory (MRT), information delivered through multiple modalities (e.g., visual and tactile) can be processed more effectively than the same information communicated through a single modality. The purpose of this meta-analysis is to compare user effectiveness when using visual-tactile task feedback (multimodal) versus visual task feedback alone (a single modality). Results indicate that visual-tactile feedback enhances task effectiveness more than visual feedback alone (g = .38). Across criteria, visual-tactile feedback is particularly effective at reducing reaction time (g = .631) and increasing performance (g = .618). Follow-up moderator analyses indicate that visual-tactile feedback is more effective when workload is high (g = .844) and when multiple tasks are being performed (g = .767). Implications of the results are discussed in the paper.
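Both abstracts report effect sizes as Hedges' g, a standardized mean difference with a small-sample bias correction. As a quick illustration of how such a value is computed from two groups' summary statistics, the sketch below applies the standard formula; the function name and all numbers are hypothetical and do not come from the studies in these meta-analyses.

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: standardized mean difference with small-sample correction.

    m1, s1, n1: mean, SD, and size of the treatment group (e.g., a
    visual-tactile feedback condition); m2, s2, n2: the comparison
    group (e.g., visual-only feedback). Illustrative values only.
    """
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups.
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled          # Cohen's d
    j = 1 - 3 / (4 * df - 1)          # small-sample correction factor
    return j * d

# Made-up example: performance scores under multimodal vs. visual-only
# feedback, 30 participants per group.
print(round(hedges_g(82.0, 10.0, 30, 75.5, 11.0, 30), 3))  # ~0.61
```

By convention, values of g around .2, .5, and .8 are read as small, medium, and large effects, which puts most of the moderator results reported above in the medium-to-large range.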
ARL-TR-4068, March 2007. Approved for public release; distribution is unlimited.
ABSTRACT: The purpose of this report is to describe the development of a framework to enable classification, evaluation, and comparison of multimodal display research, based on task demands, display characteristics, research design, and individual differences. In this report, we describe the process by which a bibliographic database was developed and organized. First, the framework was specified; it then guided the identification and review of the research and theory-based articles included in the bibliography. The results of the overall effort, namely the multimodal framework and article tracking sheet, the bibliographic database, and the searchable multimodal database, make substantial and valuable contributions to the accumulation and interpretation of multimodal research. References collected in this effort are listed in the appendix.