Numerous research studies on digital educational games (DEGs) have focused on whether they help improve children's learning performance. Nonetheless, only a few studies have sought to address how children learn through DEGs. We were motivated to bridge this gap through an empirical study using eye-tracking methodology. A total of 94 five-year-olds took part in the study and were asked to play either a DEG or a cardboard game on numeracy. We analysed how fixation duration (a proxy for attention) related to learning strategies according to the children's achievement level. The main findings were: the DEG did not yield a significant learning effect, but its cardboard version did; the children's performance on recall-based tasks was significantly worse than on recognition-based tasks; and achievement level played a significant role in how attention was distributed across the objects of the games. Reflections on applying eye-tracking methods to young children are also discussed.
Digital Educational Games (DEGs) have been used to support children's learning in various domains. A number of existing studies on DEGs have focused on whether they can improve children's learning performance. However, only a few of them have attempted to address the critical question of how young children interact with DEGs. Bridging this gap was the main motivation underpinning this research study. Using eye-tracking technology, we pursued this goal by evaluating a bespoke DEG on numeracy, and its cardboard version, that we developed based on the UK Early Years Foundation Stage (EYFS) framework. A between-subjects experimental study involving 94 five-year-olds was conducted. The research protocols and instruments were pilot tested and ethically approved. In analysing the eye-tracking data, we refined the Gaze Sub-sequence Marking Scheme to infer children's interaction strategies. Results showed that the difference in learning effect between the digital and cardboard game was insignificant, that the children's interaction strategies varied significantly with their achievement level, and that children's gender was not a significant factor in determining the impact of learning with the DEG. Implications for rendering eye-tracking technology more child-friendly and for designing DEGs for young children are drawn.
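As an illustration of the kind of analysis the two abstracts above describe, the following minimal Python sketch aggregates fixation duration per area of interest (AOI) by achievement level. The data frame, column names (participant, achievement_level, aoi, duration_ms), and AOI labels are hypothetical placeholders; this is not the study's actual Gaze Sub-sequence Marking Scheme or coding pipeline, only a sketch of how fixation-duration summaries of this kind can be computed.

```python
# Illustrative sketch only: total fixation duration per AOI, compared across
# achievement groups. All column names and values below are hypothetical.
import pandas as pd

fixations = pd.DataFrame({
    "participant":       [1, 1, 1, 2, 2, 3, 3],
    "achievement_level": ["high", "high", "high", "low", "low", "high", "high"],
    "aoi":               ["number_line", "character", "number_line",
                          "character", "number_line", "number_line", "character"],
    "duration_ms":       [420, 180, 650, 300, 210, 530, 240],
})

# Total fixation duration per participant and AOI (a common proxy for attention).
per_aoi = (fixations
           .groupby(["participant", "achievement_level", "aoi"], as_index=False)
           ["duration_ms"].sum())

# Mean attention per AOI within each achievement group.
summary = (per_aoi
           .groupby(["achievement_level", "aoi"])["duration_ms"]
           .mean()
           .unstack())
print(summary)
```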
In this study, we aimed to find an optimized approach to facial recognition with and without masks using machine learning and deep learning techniques. Prior studies used only a single machine learning model for classification and did not report optimal parameter values. In contrast, we utilized grid search with hyperparameter tuning and nested cross-validation to achieve better results during the verification phase. We performed experiments on a large dataset of facial images with and without masks. Our findings showed that the SVM model with hyperparameter tuning achieved the highest accuracy among the models compared, with a recognition accuracy of 0.99912. The precision values for recognition without masks and with masks were 0.99925 and 0.98417, respectively. We tested our approach in real-life scenarios and found that it accurately identified masked individuals through facial recognition. Furthermore, our study stands out from others in that it incorporates hyperparameter tuning and nested cross-validation during the verification phase to enhance the model's performance, generalization, and robustness while optimizing data utilization. Our optimized approach has potential implications for improving security systems in various domains, including public safety and healthcare.
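The abstract above describes combining grid-search hyperparameter tuning with nested cross-validation for an SVM classifier. A minimal sketch of that general setup using scikit-learn is shown below; the dataset (a stand-in digits dataset rather than face images), the feature representation, and the parameter grid are illustrative assumptions, not the study's actual configuration or reported parameters.

```python
# Illustrative sketch of nested cross-validation around a grid-searched SVM.
# Dataset and parameter grid are placeholders, not the study's actual setup.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # stand-in for face-image feature vectors

param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]}
pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

inner_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Inner loop: grid search selects hyperparameters on each training fold.
grid = GridSearchCV(pipeline, param_grid, cv=inner_cv,
                    scoring="accuracy", n_jobs=-1)

# Outer loop: score the tuned model on folds it never saw during tuning.
nested_scores = cross_val_score(grid, X, y, cv=outer_cv, scoring="accuracy")
print(f"Nested CV accuracy: {nested_scores.mean():.4f} +/- {nested_scores.std():.4f}")
```

Because hyperparameters are chosen only inside the inner loop, the outer-loop accuracy is a less optimistic estimate of generalization than a single cross-validated grid search evaluated on its own folds.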