2021
DOI: 10.3390/brainsci11010106
Single-Trial Recognition of Video Gamer’s Expertise from Brain Haemodynamic and Facial Emotion Responses

Abstract: With an increase in consumer demand for video gaming entertainment, the game industry is exploring novel ways of game interaction, such as providing direct interfaces between the game and the gamers' cognitive or affective responses. In this work, gamers' brain activity was imaged using functional near-infrared spectroscopy (fNIRS) while they watched video of a video game (League of Legends) that they play. A video of each participant's face was also recorded for each of a total of 15 trials, where a trial is …

Cited by 20 publications (8 citation statements)
References 42 publications
“…The field's high-level technology enabled video game players to share facial expressions, body language cues, and physiological indicators such as the heartbeat in real time [70]. In a study conducted by Andreu-Perez et al. [71], researchers were able to classify gamers' expertise by decoding their emotions and brain activity from their facial expressions. This means that advanced technology implemented in specific fields can possibly be adopted for multidisciplinary purposes, such as enhancing the VCP field.…”
Section: Eye Contact and Body Language
confidence: 99%
“…Andreu-Perez et al. 89 classified the expertise of subjects watching 30 s clips of the video game League of Legends using fNIRS data and facial expressions. The fully connected deep neural network (FCDNN) and deep classifier autoencoder (DCAE) were compared against many traditional machine learning techniques, including SVM and kNN in a three-class test to determine the skill level of the subject watching.…”
Section: Applications in fNIRS
confidence: 99%
“…While most studies have focused on using cortical activations to classify when a specified task is being completed, a few studies have begun looking into predicting the skill level of the subject at a given task. Andreu-Perez et al. 89 classified the expertise of subjects watching 30 s clips of the video game League of Legends using fNIRS data and facial expressions. The fully connected deep neural network (FCDNN) and deep classifier autoencoder (DCAE) were compared against many traditional machine learning techniques, including SVM and kNN, in a three-class test to determine the skill level of the subject watching.…”
Section: Analysis of Cortical Activations
confidence: 99%
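The three-class comparison described above (SVM and kNN baselines for predicting skill level from extracted features) can be sketched with scikit-learn. This is a minimal illustration, not the study's pipeline: the synthetic Gaussian feature vectors stand in for real per-trial fNIRS/facial features, and the class separation and feature dimensionality are arbitrary assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for per-trial feature vectors (the real study used
# haemodynamic and facial-emotion features); three skill classes.
n_per_class, n_feats = 30, 16
X = np.vstack([rng.normal(loc=2.0 * c, scale=1.0, size=(n_per_class, n_feats))
               for c in range(3)])
y = np.repeat([0, 1, 2], n_per_class)  # e.g. novice / intermediate / expert

# Cross-validated accuracy for the two classical baselines named in the quote
results = {}
for name, clf in [("SVM", SVC()), ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {results[name]:.2f}")
```

On data this cleanly separated both baselines score highly; the studies cited compare such baselines against deep models (FCDNN, DCAE) on much noisier physiological features.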

“…Rocket [10] and MiniRocket [12] use random convolution kernels for transforming input time series into a set of features to train a linear classifier, without training the kernels. Both methods have shown fast and accurate time series classification on standard datasets and for different applications in a fraction of the training time of existing methods, such as UCR archive [15], inter-burst detection in electroencephalogram (EEG) signals [16], driver's distraction detection using EEG signals [17], functional near infrared spectroscopy signals classification [18], and human activity recognition [19].…”
Section: Introduction
confidence: 99%
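The Rocket idea described in the quote — transform each time series with many random, untrained convolution kernels, then fit a linear classifier on the resulting features — can be sketched as follows. This is a simplified toy version under stated assumptions (small kernel count, a reduced dilation scheme, no padding, and a toy sine-vs-noise dataset), not the published Rocket/MiniRocket implementation.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

rng = np.random.default_rng(0)

def make_kernels(n_kernels=100):
    """Random, untrained kernels: Gaussian weights, random length and dilation
    (simplified relative to the Rocket paper)."""
    kernels = []
    for _ in range(n_kernels):
        length = int(rng.choice([3, 5, 7, 9]))
        weights = rng.standard_normal(length)
        weights -= weights.mean()              # mean-centre, as in Rocket
        bias = rng.uniform(-1, 1)
        dilation = int(2 ** rng.uniform(0, 3))
        kernels.append((weights, bias, dilation))
    return kernels

def transform(X, kernels):
    """Two features per kernel: max response and proportion of positive values (PPV)."""
    feats = np.empty((len(X), 2 * len(kernels)))
    for i, x in enumerate(X):
        for k, (w, b, d) in enumerate(kernels):
            idx = np.arange(len(w)) * d        # dilated tap positions
            n_valid = len(x) - idx[-1]
            conv = np.array([x[j + idx] @ w + b for j in range(n_valid)])
            feats[i, 2 * k] = conv.max()
            feats[i, 2 * k + 1] = (conv > 0).mean()
    return feats

# Toy demo: separate noisy sine-like series from pure-noise series
X = np.vstack([np.sin(np.linspace(0, 8, 150)) + 0.1 * rng.standard_normal(150)
               for _ in range(20)]
              + [rng.standard_normal(150) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

kernels = make_kernels()
clf = RidgeClassifierCV().fit(transform(X, kernels), y)  # only the linear head is trained
```

Because the kernels are never trained, the whole cost is one feature transform plus a ridge fit, which is the source of the speed advantage the quote highlights.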