In video games, evaluation of the user experience (UX) centers on two main groups of aspects: those related to the player, oriented toward making the player feel good while playing, and those related to the video game, oriented toward making the game easy to understand and play. The player-related aspects considered here are engagement, enjoyment, and flow; the video-game-related aspects are usability and dependability. Virtual reality environments have changed the paradigm in various fields of application, such as health, education, and entertainment. It is therefore important to observe the effects of handedness on hand movements in virtual reality environments. This work proposes a model to evaluate and improve the user experience that considers both player and video game aspects, taking into account handedness and hand movements in virtual reality environments. Player and video game aspects can be added to evaluations of the effect of handedness, especially in virtual reality environments, to characterize the user's behavior in terms of skill, performance, and accuracy, among other features, when a particular hand is used to perform specific tasks. A case study is then presented in which two groups of users perform several tasks in a virtual reality environment using the dominant and non-dominant hand. Evaluating these tasks makes it possible to measure levels of engagement, enjoyment, motivation, and usability in a virtual reality environment. Finally, an analysis of the results is presented, along with several proposed UX improvements.
In learning environments, emotions can activate or deactivate the learning process. Boredom, stress, and happiness (learning-related emotions) are included in physiological-signal datasets, but not in Facial Expression Recognition (FER) datasets. In addition, the Galvanic Skin Response (GSR) signal is the most representative data for emotion classification. This paper presents a technique to generate a dataset of facial expressions and physiological signals for spontaneous and acted learning-related emotions (boredom, stress, happiness, and a neutral state) elicited during video stimuli and face acting. We conducted an experiment with 22 Mexican participants; a dataset of 1,840 facial expression images and 1,584 GSR registers was generated. A Convolutional Neural Network (CNN) model was trained with the facial expression dataset, and a statistical analysis was performed with the GSR dataset. The MobileNet CNN reached an overall accuracy of 94.36% on the confusion matrix, but accuracy decreased to 28% on external images not seen during training. Statistical results for GSR, showing significant differences among confused emotions, are discussed.
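The overall accuracy quoted above is the standard figure derived from a confusion matrix: the sum of the diagonal (correctly classified samples) divided by the total number of samples. A minimal sketch, using illustrative counts for the four classes (boredom, stress, happy, neutral) that are not the paper's actual data:

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy = correctly classified (diagonal) / total samples."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

# Hypothetical 4-class confusion matrix (rows = true class, cols = predicted);
# the counts are made up for illustration only.
cm = [[50, 2, 1, 2],
      [3, 48, 2, 2],
      [1, 1, 52, 1],
      [2, 2, 1, 50]]
print(round(overall_accuracy(cm), 4))  # → 0.9091
```

The gap between this in-matrix accuracy and the 28% reported for external images is the usual symptom of evaluating on data drawn from the same capture conditions as the training set.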
The video game and entertainment industry has been growing in recent years, particularly in areas related to Virtual Reality (VR). Video game creators are therefore looking for ways to offer and improve realism in their applications in order to increase user satisfaction. In this sense, strategies to evaluate and improve the gaming experience typically target groups of people, without considering that users have different preferences; satisfaction should also be achieved for each individual user. In this work, we present a model to improve the user experience in a personalized way through reinforcement learning (RL). Unlike other approaches, the proposed model adjusts parameters of the virtual environment in real time based on user preferences rather than on physiological data or performance. The model design is based on the Model-Driven Architecture (MDA) approach and consists of three main phases: analysis, design, and implementation. As a result, a simulation experiment is presented that shows transitions from undesired satisfaction states to desired satisfaction states on a per-user basis.
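The transition from undesired to desired satisfaction states can be illustrated with a tabular Q-learning sketch. This is an assumption for illustration: the abstract does not specify the RL algorithm, and the three satisfaction states, the parameter-adjustment actions, and the toy environment dynamics below are all hypothetical.

```python
import random

# Hypothetical satisfaction states and parameter-adjustment actions
# (e.g. tuning a difficulty or realism parameter of the VR environment).
STATES = ["undesired", "neutral", "desired"]
ACTIONS = ["decrease", "keep", "increase"]

def train(steps=20000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy environment where 'increase' moves
    satisfaction one level up and 'decrease' one level down."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = "undesired"
    for _ in range(steps):
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        # Toy dynamics: in a real system this step would come from the user.
        idx = STATES.index(state)
        if action == "increase":
            idx = min(idx + 1, len(STATES) - 1)
        elif action == "decrease":
            idx = max(idx - 1, 0)
        next_state = STATES[idx]
        reward = 1.0 if next_state == "desired" else 0.0
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q

q = train()
# After training, the greedy policy in the "undesired" state should
# prefer the action that moves the user toward a desired state.
best = max(ACTIONS, key=lambda a: q[("undesired", a)])
```

In the real model the reward would come from the user's stated preferences rather than a hand-coded transition table, but the learned policy plays the same role: steering each user from undesired toward desired satisfaction states.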