Fixation identification is an essential task in the extraction of relevant information from gaze patterns; various algorithms are used in the identification process. However, the thresholds used in the algorithms greatly affect their sensitivity. Moreover, the application of these algorithms to eye-tracking technologies integrated into head-mounted displays, where the subject’s head position is unrestricted, is still an open issue. Therefore, the adaptation of eye-tracking algorithms and their thresholds to immersive virtual reality frameworks needs to be validated. This study presents the development of a dispersion-threshold identification algorithm applied to data obtained from an eye-tracking system integrated into a head-mounted display. Rules-based criteria are proposed to calibrate the thresholds of the algorithm through different features, such as the number of fixations and the percentage of points that belong to a fixation. The results show that distance-dispersion thresholds between 1–1.6° and time windows between 0.25–0.4 s are the acceptable parameter ranges, with 1° and 0.25 s being the optimum. The work presents a calibrated algorithm to be applied in future experiments with eye-tracking integrated into head-mounted displays, as well as guidelines for calibrating fixation identification algorithms.
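The dispersion-threshold identification (I-DT) approach described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes gaze samples expressed as visual angles in degrees, uses the classic dispersion measure (horizontal range plus vertical range), and adopts the abstract's optimal parameters (1°, 0.25 s) as defaults.

```python
import numpy as np

def idt_fixations(t, x, y, dispersion_deg=1.0, window_s=0.25):
    """Dispersion-threshold (I-DT) fixation identification sketch.

    t: timestamps in seconds; x, y: gaze angles in degrees.
    Returns a list of (start_idx, end_idx) fixation spans (end exclusive).
    """
    fixations = []
    i, n = 0, len(t)
    while i < n:
        # Grow an initial window spanning at least the minimum duration.
        j = i
        while j < n and t[j] - t[i] < window_s:
            j += 1
        if j >= n:
            break
        # Dispersion = (max x - min x) + (max y - min y) over the window.
        disp = (x[i:j + 1].max() - x[i:j + 1].min()) + (y[i:j + 1].max() - y[i:j + 1].min())
        if disp <= dispersion_deg:
            # Expand the window while dispersion stays under the threshold.
            while j + 1 < n:
                d = (x[i:j + 2].max() - x[i:j + 2].min()) + (y[i:j + 2].max() - y[i:j + 2].min())
                if d > dispersion_deg:
                    break
                j += 1
            fixations.append((i, j + 1))
            i = j + 1
        else:
            # Window too dispersed: drop the first point and retry.
            i += 1
    return fixations
```

With this structure, calibrating the algorithm amounts to sweeping `dispersion_deg` and `window_s` over the ranges the study examines (1–1.6° and 0.25–0.4 s) and scoring features such as the number of fixations and the fraction of points inside fixations.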
Risk taking (RT) measurement constitutes a challenge for researchers and practitioners and has been addressed from different perspectives. Personality traits and temperamental aspects such as sensation seeking and impulsivity influence the individual’s approach to RT, prompting risk-seeking or risk-aversion behaviors. Virtual reality has emerged as a suitable tool for RT measurement, since it enables the exposure of a person to realistic risks, allowing embodied interactions, the application of stealth assessment techniques, and real-time physiological measurement. In this article, we present the assessment on decision making in risk environments (AEMIN) tool, an enhanced version of the spheres and shield maze task, a previous tool developed by the authors. The main aim of this article is to study whether it is possible to discriminate participants with high versus low scores in the measures of personality, sensation seeking, and impulsivity through their behaviors and physiological responses while playing AEMIN. Applying machine learning methods to the dataset, we explored: (a) whether these data make it possible to discriminate between the two populations in each variable; and (b) which parameters better discriminate between the two populations in each variable. The results support the use of AEMIN as an ecological assessment tool to measure RT, since it brings to light behaviors that allow subjects to be classified into high/low risk-related psychological constructs. Regarding physiological measures, galvanic skin response seems to be less salient in prediction models.
Risk taking (RT) is a component of the decision-making process in situations that involve uncertainty and in which the probability of each outcome – rewards and/or negative consequences – is already known. The influence of cognitive and emotional processes in decision making may affect how risky situations are addressed. First, inaccurate assessments of situations may constitute a perceptual bias in decision making, which might influence RT. Second, there seems to be consensus that a proneness bias exists, known as risk proneness, which can be defined as the propensity to be attracted to potentially risky activities. In the present study, we take the approach that risk perception and risk proneness affect RT behaviours. The study hypothesises that locus of control, emotion regulation, and executive control act as perceptual biases in RT, and that personality, sensation seeking, and impulsivity traits act as proneness biases in RT. The results suggest that locus of control, emotion regulation, and executive control influence certain domains of RT, while personality influences all domains except the recreational one, and sensation seeking and impulsivity are involved in all domains of RT. The results of the study constitute a foundation upon which to build in this research area and can contribute to an increased understanding of human behaviour in risky situations.
Scholars are increasingly using electrodermal activity (EDA) to assess cognitive-emotional states in laboratory environments, while recent applications have recorded EDA in uncontrolled settings, such as daily-life and virtual reality (VR) contexts, in which users can freely walk and move their hands. However, these recordings can be affected by major artifacts stemming from movements, which can obscure valuable information. Previous work has analyzed signal correction methods to improve the quality of the signal or proposed artifact recognition models based on time windows. Despite these efforts, the correction of EDA signals in uncontrolled environments is still limited, and no existing research has used a signal manually corrected by an expert as a benchmark. This work investigates different machine learning and deep learning architectures, including support vector machines, recurrent neural networks (RNNs), and convolutional neural networks, for the automatic artifact recognition of EDA signals. Data from 44 subjects during an immersive VR task were collected and cleaned by two experts as ground truth. The best model, an RNN fed with the raw signal, recognized 72% of the artifacts and had an accuracy of 87%. An automatic correction was performed on the detected artifacts through a combination of linear interpolation and a high-degree polynomial. The evaluation of this correction showed that the automatically and manually corrected signals did not present differences in terms of phasic components, while both differed from the raw signal. This work provides a tool to automatically correct artifacts in EDA signals that can be used in uncontrolled conditions, allowing for the development of intelligent systems based on EDA monitoring without human intervention.
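The correction step described above, replacing samples flagged as artifacts with values interpolated from the surrounding clean signal, can be sketched minimally. This is an illustrative assumption, not the paper's exact method: the study combines linear interpolation with a high-degree polynomial, whereas this sketch shows only the linear-interpolation component; the function name and mask representation are hypothetical.

```python
import numpy as np

def correct_artifacts(signal, artifact_mask):
    """Replace artifact samples by linear interpolation from clean neighbors.

    signal: 1-D EDA signal (e.g. microsiemens).
    artifact_mask: boolean array of the same length, True where an
    artifact-recognition model flagged the sample.
    """
    corrected = np.asarray(signal, dtype=float).copy()
    idx = np.arange(len(corrected))
    clean = ~np.asarray(artifact_mask, dtype=bool)
    # Interpolate flagged positions from the nearest clean samples on each side.
    corrected[~clean] = np.interp(idx[~clean], idx[clean], corrected[clean])
    return corrected
```

In a full pipeline along the lines the abstract describes, `artifact_mask` would come from the trained RNN's per-window predictions, and the corrected signal would then be decomposed into tonic and phasic components for analysis.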