During the COVID-19 pandemic, many offline activities were moved online via video meetings to prevent the spread of the virus. Online video meetings lack some of the micro-interactions present in direct social interaction. Machine-assisted facial expression recognition in online video meetings is expected to improve understanding of the interactions among users. Many studies have shown that CNN-based neural networks are effective and accurate in image classification. In this study, several open facial expression datasets, totaling 342,497 training images, were used to train CNN-based neural networks. The best results were obtained with a ResNet-50 architecture using the Mish activation function and the Accuracy Booster Plus block, trained with the Ranger optimizer and Gradient Centralization for 60,000 steps at a batch size of 256. The best model achieves an accuracy of 0.5972 on the AffectNet validation set, 0.8636 on the FERPlus validation set, 0.8488 on the FERPlus test set, and 0.8879 on the RAF-DB test set. The proposed method outperformed plain ResNet in all test scenarios without transfer learning, and there is potential for better performance with a pre-trained model. The code is available at https://github.com/yusufrahadika-facial-expressions-essay.
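As an illustration of one component named above (not taken from the authors' repository), the Mish activation used in this architecture is defined as x · tanh(softplus(x)) and can be sketched in a few lines:

```python
import math

def softplus(x: float) -> float:
    """Numerically stable softplus: ln(1 + e^x)."""
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def mish(x: float) -> float:
    """Mish activation: x * tanh(softplus(x)).

    Smooth and non-monotonic; unlike ReLU it lets small
    negative values through, e.g. mish(-1) is approx -0.30.
    """
    return x * math.tanh(softplus(x))
```

In practice a deep-learning framework's built-in implementation (e.g. `torch.nn.Mish`) would be used; this scalar version only shows the formula.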
This research aimed to evaluate the performance of A Lite BERT (ALBERT), Efficiently Learning an Encoder that Classifies Token Replacements Accurately (ELECTRA), and a Robustly Optimized BERT Pretraining Approach (RoBERTa) to support the development of an Indonesian question-and-answer system. The evaluation used Indonesian, Malay, and Esperanto. Esperanto was used as a point of comparison for Indonesian because it is an international language that belongs to no individual or country, which makes it neutral; compared to other foreign languages, its structure and construction are also relatively simple. The datasets used were crawled from Wikipedia for Indonesian and from the Open Super-large Crawled ALMAnaCH coRpus (OSCAR) for Esperanto. The token dictionary used in the tests contained approximately 30,000 subword tokens, built with both the SentencePiece and byte-level byte pair encoding (ByteLevelBPE) methods. The tests were carried out with learning rates of 1e-5 and 5e-5 for both languages, following the bidirectional encoder representations from transformers (BERT) paper. In the final results, the ALBERT and RoBERTa models on Esperanto produced loss values that were not substantially different; overall, the results indicate that the RoBERTa model is better suited for implementing an Indonesian question-and-answer system.
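To clarify what building a BPE subword vocabulary (as in the ByteLevelBPE setup above) involves, here is a minimal, illustrative sketch of one BPE training iteration: count adjacent symbol pairs across the corpus and merge the most frequent pair into a new symbol. Real tokenizer training repeats this until the target vocabulary size (here, roughly 30,000) is reached; this toy version is not the authors' pipeline.

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across all tokenized words.

    `words` maps a tuple of symbols to its corpus frequency.
    """
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with one merged symbol."""
    a, b = pair
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b)   # fuse the pair into a new subword
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged
```

A production tokenizer (e.g. the Hugging Face `tokenizers` library used for ByteLevelBPE) additionally operates on raw bytes and records the merge order so it can be replayed at encoding time.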
COVID-19 detection generally relies on laboratory testing with the RT-PCR method to obtain accurate results. Unfortunately, this test takes a fairly long time, around 24 hours, to produce a result. Besides RT-PCR, several studies have shown that detection using X-ray images gives fairly accurate results with faster prediction times. X-ray images, dominated by colors in the grayscale range, have different characteristics from images in general, so in this study experiments were conducted on training X-ray image classifiers from scratch. However, models trained without pretraining often fail to reach sufficiently good performance. One pretraining method that can be used is an autoencoder trained for image reconstruction. In this study, training with autoencoder pretraining achieved a best accuracy of 81.78%, with CutMix, color manipulation, and rotation added as augmentations. We also show that adding autoencoder pretraining consistently improves accuracy by up to 2.58% over models trained from scratch.
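To illustrate one of the augmentations mentioned above, here is a minimal, framework-free sketch of CutMix: a rectangular patch from one image is pasted into another, and the label is mixed in proportion to the pasted area. This is a simplified illustration on plain 2D lists, not the study's actual implementation.

```python
import random

def cutmix(img_a, img_b, label_a, label_b, lam):
    """Paste a random box from img_b into a copy of img_a.

    The box covers roughly (1 - lam) of the image area; the label
    mixing weight is then adjusted to the exact pasted area, as in
    the CutMix paper. Inputs: equally sized H x W 2D lists and
    one-hot label lists.
    """
    h, w = len(img_a), len(img_a[0])
    cut_ratio = (1.0 - lam) ** 0.5               # box side-length ratio
    ch, cw = int(h * cut_ratio), int(w * cut_ratio)
    cy, cx = random.randrange(h), random.randrange(w)
    y1, y2 = max(cy - ch // 2, 0), min(cy + ch // 2, h)
    x1, x2 = max(cx - cw // 2, 0), min(cx + cw // 2, w)
    mixed = [row[:] for row in img_a]
    for y in range(y1, y2):
        mixed[y][x1:x2] = img_b[y][x1:x2]        # paste the patch
    lam_adj = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)
    mixed_label = [lam_adj * a + (1 - lam_adj) * b
                   for a, b in zip(label_a, label_b)]
    return mixed, mixed_label
```

In a real training loop the same operation would run on batched tensors, with `lam` drawn from a Beta distribution per batch.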