The outbreak of coronavirus disease 2019 (COVID-19) has disrupted education systems worldwide, largely through the abrupt transition to online learning. The resulting surge in the use of digital devices (personal computers, laptops, tablets, and smartphones) is unprecedented, and it has brought a new wave of mental and physical health problems among students, including eye-related illnesses. Overexposure to electronic devices, extended screen time, and a lack of outdoor sunlight place considerable strain on students' ophthalmic health, particularly given their young age and relative inattention to their own well-being. Without appropriate preventive measures, this can lead to common ophthalmic conditions such as myopia, or to more serious disorders. To remedy this situation, we propose a software solution that tracks and captures images of its users' eyes to detect symptoms of eye illness while issuing warnings and even suggesting treatments. To meet the requirement of a small, lightweight model that runs on low-end devices without loss of accuracy, we optimized the original MobileNetV2 architecture, which is built on depth-wise separable convolutions, by altering the parameters of its final layers so as to minimize resizing of the input image; we call the resulting model EyeNet. Combined with knowledge distillation, using ResNet-18 as the teacher model to train the student, this raised EyeNet's accuracy to 87.16% and supports deployment on embedded systems with limited computing power, making the tool accessible to all students.
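The abstract does not give implementation details of its knowledge distillation setup, but the core idea (training the student EyeNet to match ResNet-18's softened output distribution) can be sketched as follows. This is a minimal, framework-free illustration of the standard distillation loss; the temperature value and the logits are hypothetical, not taken from the paper.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between the teacher's and student's softened
    class distributions, scaled by T^2 as is conventional in
    knowledge distillation (Hinton et al. style)."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s) if pt > 0)
    return (temperature ** 2) * kl
```

In practice this term is usually mixed with the ordinary cross-entropy on ground-truth labels; the mixing weight and temperature are tuning choices not specified in the abstract.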
We present the design of a traffic-signal violation detection system that uses machine learning to help reduce the growing number of road accidents. The system improves accuracy by combining a region of interest with the location of each vehicle during a red-signal state. By modifying several parameters of YOLOv5s and retraining on the COCO dataset, we obtain a model that achieves 82% accuracy for vehicle identification, 90% for detecting traffic-signal state changes, and up to 86% for violation detection. The system can be applied to red-light violation detection, assisting traffic police in traffic management.
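The decision rule described (a vehicle located inside the region of interest while the signal is red counts as a violation) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the box format and function names are assumptions.

```python
def bbox_center(box):
    """Center point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def in_roi(point, roi):
    """True if a point lies inside an (x1, y1, x2, y2) region of interest."""
    x, y = point
    rx1, ry1, rx2, ry2 = roi
    return rx1 <= x <= rx2 and ry1 <= y <= ry2

def detect_violations(vehicle_boxes, signal_state, roi):
    """Return the detected vehicle boxes that violate a red signal:
    any vehicle whose center falls inside the ROI while the light is red."""
    if signal_state != "red":
        return []
    return [b for b in vehicle_boxes if in_roi(bbox_center(b), roi)]
```

In the described system the boxes would come from the retrained YOLOv5s detector and the signal state from the traffic-light classifier; here both are passed in directly for clarity.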
Many educational institutions have recently switched to online learning due to the COVID-19 pandemic. The nature of online learning makes dishonest behaviors, such as cheating during lessons, easier to commit and harder to detect. We propose a new artificial-intelligence-powered solution to help educators address this growing problem and foster a fairer learning environment. We developed a visual-representation contrastive learning method with the MobileNetV2 network as the backbone to improve prediction from an unlabeled dataset, and the resulting model can be deployed on low-power devices. Experiments show an accuracy of up to 59%, better than several previous studies, demonstrating the viability of this approach.
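The abstract does not name the specific contrastive objective used, but contrastive representation learning on unlabeled data is commonly trained with an NT-Xent-style loss: pull an augmented view of the same image toward its anchor and push other images away. The sketch below is a minimal, framework-free illustration under that assumption; the vectors would in practice be embeddings produced by the MobileNetV2 backbone.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nt_xent_loss(anchor, positive, negatives, temperature=0.5):
    """Normalized temperature-scaled cross-entropy for one anchor:
    -log( exp(sim(a, pos)/T) / (exp(sim(a, pos)/T) + sum_neg exp(sim(a, neg)/T)) )."""
    pos = math.exp(cosine_sim(anchor, positive) / temperature)
    negs = sum(math.exp(cosine_sim(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + negs))
```

The loss shrinks as the positive pair becomes more similar than the negatives, which is exactly the pressure that lets the backbone learn useful features without labels.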
Babies who cannot yet communicate through language use crying to express themselves. By identifying the distinctive characteristics of a cry, parents can quickly meet their child's needs and safeguard their health. This study aimed to create a lightweight deep learning model, called Bbcry, to classify infant cries and determine the underlying condition, such as hunger, pain, normal, deafness, or asphyxia. The model was trained on the Chillanto dataset and developed in three stages. First, the Wav2Vec 2.0 model was used as the teacher in a Knowledge Distillation (KD) scheme applied to the transformer and prediction layers to reduce the number of required parameters. Next, a projection head layer was added and linked to the transformer layers to control their influence relative to the Wav2Vec 2.0 model, yielding the first version of Bbcry with an accuracy of 93.39% and an F1-score of 87.60%. Finally, the number of transformer layers was reduced to produce the Bbcry-v4 model, which has only 9.23 million parameters, roughly 10% of Wav2Vec 2.0's, with only a slight drop in accuracy and F1-score. The study concludes with a software demonstration of the proposed model's ability to accurately recognize and determine infants' needs from their cries.
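To see why cutting transformer layers dominates the parameter budget, a rough per-layer count can be worked out from the standard transformer block (attention projections plus feed-forward network). The dimensions below are illustrative assumptions based on the common Wav2Vec 2.0 base configuration, not figures reported by the paper; biases, layer norms, and the convolutional feature extractor are ignored.

```python
def transformer_params(d_model, d_ff, n_layers):
    """Rough parameter count for a stack of transformer encoder layers:
    4 * d_model^2 for the Q/K/V/output attention projections plus
    2 * d_model * d_ff for the two feed-forward matrices, per layer."""
    per_layer = 4 * d_model * d_model + 2 * d_model * d_ff
    return n_layers * per_layer

# Illustrative: a 12-layer stack at base-model width vs. a heavily reduced stack.
full = transformer_params(d_model=768, d_ff=3072, n_layers=12)
small = transformer_params(d_model=768, d_ff=3072, n_layers=1)
```

Under these assumed dimensions the full 12-layer stack accounts for roughly 85 million parameters, so removing most of the layers, as in Bbcry-v4, is what brings the model into the ~9-million-parameter range.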