This article describes an emerging approach to the design of human-machine systems referred to as "neuroadaptive interface technology". A neuroadaptive interface is an ensemble of computer-based displays and controls whose functional characteristics change in response to meaningful variations in the user's cognitive and/or emotional states. Variations in these states are indexed by corresponding central nervous system activity, which controls functionally adaptive modifications to the interface. The purpose of these modifications is to promote safer and more effective human-machine system performance. While fully functional adaptive interfaces of this type do not currently exist, promising steps are being taken toward their development, and there is great potential value in doing so, value that corresponds directly to and benefits from a neuroergonomic approach to systems development. Specifically, it is argued that the development of these systems will greatly enhance overall human-machine system performance by providing more symmetrical communication between users and computer-based systems than currently exists. Furthermore, their development will promote a greater understanding of the relationship between nervous system activity and human behaviour (specifically work-related behaviour), and as such may serve as an exemplary paradigm for neuroergonomics. A number of current research and development areas related to neuroadaptive interface design are discussed, and challenges associated with the development of this technology are described.
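The abstract above does not specify an implementation; purely as an illustrative sketch of the closed loop it describes (a physiological index driving interface adaptation), the following Python fragment assumes a hypothetical read_workload_index() signal source, two display modes, and arbitrary thresholds:

```python
import time

HIGH_WORKLOAD = 0.7   # hypothetical threshold on a normalized 0-1 workload index
LOW_WORKLOAD = 0.4    # lower threshold gives hysteresis, avoiding rapid mode switching

def read_workload_index():
    """Placeholder for a CNS-derived workload estimate (e.g. from EEG).

    A real neuroadaptive system would stream and classify physiological
    data; only the control loop is illustrated here.
    """
    return 0.5

def adapt_interface(mode):
    """Placeholder: reconfigure displays and controls for the given mode."""
    print(f"interface mode -> {mode}")

def neuroadaptive_loop():
    mode = "full"                      # start with the full, unsimplified interface
    while True:
        index = read_workload_index()
        if mode == "full" and index > HIGH_WORKLOAD:
            mode = "simplified"        # offload the user when workload is high
            adapt_interface(mode)
        elif mode == "simplified" and index < LOW_WORKLOAD:
            mode = "full"              # restore full functionality when workload drops
            adapt_interface(mode)
        time.sleep(1.0)                # adaptation cadence is itself a design choice
```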
In an effort to better understand the learning potential of a tangible interface, we conducted a comparison study between a tangible and a traditional graphical user interface for teaching preschoolers (in Portugal, children enter preschool at the age of three and attend it until entering school, normally at the age of six) about good oral hygiene. The study was carried out with two groups of children aged 4 to 5 years. Questionnaires to parents, children's drawings, and interviews were used for data collection and analysis, and revealed important indicators about children's change of attitude, involvement, and preferences for the interfaces. The questionnaires showed a remarkable change of attitude toward tooth brushing in the children who interacted with the tangible interface; in particular, children's motivation increased significantly. Children's drawings were used to assess their degree of involvement with the interfaces. The drawings from the children who interacted with the tangible interface were very complete and detailed, suggesting that the children felt actively involved in the experience. The results suggest that the tangible interface was capable of promoting a stronger and longer-lasting involvement and had a greater potential to engage children, thereby potentially promoting learning. Evaluation through drawing seems to be a promising method for working with preliterate children; however, it is advisable to use it together with other methods.
The study of users' emotional behavior in the Human-Computer Interaction (HCI) field has received increasing attention during the last few years. Our work in this area focuses on the relationship between user emotions and perceived usability problems. Specifically, we propose observing users' spontaneous facial expressions as a method to identify adverse-event occurrences at the user interface level. This paper reports on the results of an experiment designed to investigate the association between adverse-event occurrences during a word processing task and users' facial expressions monitored using electromyogram (EMG) sensor devices. The results suggest that an increase in task difficulty is related to an increase in specific facial muscle activity, thus creating a baseline for future developments using camera-based monitoring of facial activities.
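The abstract does not give the authors' signal-processing pipeline; a minimal sketch of the general technique it implies, comparing windowed facial-EMG activity after an adverse event against a pre-task baseline, might look like the following, where the sampling rate, window lengths, and muscle site are assumptions:

```python
import numpy as np

FS = 1000  # assumed EMG sampling rate in Hz

def rms(signal):
    """Root-mean-square amplitude, a common summary of EMG activity."""
    return float(np.sqrt(np.mean(np.square(signal))))

def emg_response_ratio(emg, event_sample, baseline_s=5.0, window_s=2.0, fs=FS):
    """Ratio of post-event EMG RMS to a pre-task baseline RMS.

    emg: 1-D array of raw EMG samples from one facial muscle site
         (e.g. corrugator supercilii, a site commonly associated with
         negative affect in the facial-EMG literature).
    event_sample: sample index of an adverse event at the interface.
    Values well above 1.0 suggest increased muscle activity after the event.
    """
    baseline = emg[: int(baseline_s * fs)]
    window = emg[event_sample : event_sample + int(window_s * fs)]
    return rms(window) / rms(baseline)

# Illustrative use with synthetic data only:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = rng.normal(0.0, 1.0, 60 * FS)
    signal[30 * FS : 32 * FS] *= 3.0          # simulated burst after an adverse event
    print(emg_response_ratio(signal, event_sample=30 * FS))
```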
The radial undistortion model proposed by Fitzgibbon and the radial fundamental matrix were early steps to extend classical epipolar geometry to distorted cameras. Later, minimal solvers were proposed to find relative pose and radial distortion given point correspondences between images. However, a major drawback of all these approaches is that they require the distortion center to be exactly known. In this paper we show how the distortion center can be absorbed into a new radial fundamental matrix. This new formulation is much more practical, as it also accommodates digital zoom, cropped images, and camera-lens systems where the distortion center does not coincide exactly with the image center. In particular, we start from the setting where only one of the two images contains radial distortion, analyze the structure of this particular radial fundamental matrix, and show that the technique generalizes to other linear multi-view relationships such as the trifocal tensor and the homography. For the new radial fundamental matrix we propose different estimation algorithms from 9, 10, and 11 points. We show how to extract the epipoles and demonstrate practical applicability on several image pairs with strong distortion that, to the best of our knowledge, no other existing algorithm can handle properly.
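For context on the distortion model this paper builds on: Fitzgibbon's one-parameter division model maps a distorted point at radius r from the distortion center to an undistorted point scaled by 1/(1 + lambda * r^2), and it is exactly this distortion center that the classical radial fundamental matrix assumes to be known. The following numpy sketch shows the division model itself, not the paper's new radial fundamental matrix; the example coordinates and lambda value are arbitrary:

```python
import numpy as np

def undistort_division_model(points, lam, center=(0.0, 0.0)):
    """Undistort 2-D image points with Fitzgibbon's one-parameter division model.

    points: (N, 2) array of distorted pixel coordinates.
    lam:    radial distortion coefficient (lambda); 0 means no distortion.
    center: assumed distortion center. The classical radial fundamental
            matrix requires this to be known exactly; relaxing that
            requirement is the contribution summarized in the abstract above.
    """
    c = np.asarray(center, dtype=float)
    p = np.asarray(points, dtype=float) - c          # center the coordinates
    r2 = np.sum(p * p, axis=1, keepdims=True)        # squared radius per point
    undistorted = p / (1.0 + lam * r2)               # division model
    return undistorted + c

# Example: a point far from the assumed center moves more than one near it.
pts = np.array([[330.0, 240.0], [620.0, 440.0]])
print(undistort_division_model(pts, lam=-1e-6, center=(320.0, 240.0)))
```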