The number of motorcyclists in Indonesia reached 105.15 million in 2016, which makes it difficult for the Indonesian government to monitor motorcyclists on the highways. A dash cam, once given intelligence, could serve as an alternative tool for detecting motorcyclists. One of the typical difficulties in detecting such objects is their complex and varied features. A convolutional neural network (CNN) capable of detecting motorcyclists was therefore proposed; a CNN had previously classified ship objects with an F1-score of 0.94. A sliding window and a heat map were used in this paper to localize motorcyclists and determine their regions. Two experiments were carried out, with the goal of finding the best combination of CNN architecture and parameters. The first experiment produced three trained weights, while the second experiment produced one trained weight. The performance of these weights against the test data was measured with F1-scores of 0.977, 0.988, and 0.989 for experiment 1 and 0.986 for experiment 2. In the sliding-window experiments, experiment 2 yielded a lower error rate in predicting motorcyclists than experiment 1, because the training data in experiment 1 contained a larger and more varied set of images.
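A minimal sketch of the sliding-window and heat-map localization step described above, written in Python with NumPy. The classify_patch function is a hypothetical stand-in for the trained CNN, and the window size, stride, and threshold are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def classify_patch(patch):
    """Placeholder for the trained CNN classifier.

    Assumption: returns the probability that the patch contains a
    motorcyclist. Replace with the actual model's prediction call.
    """
    return float(patch.mean() > 0.5)  # dummy rule, for illustration only

def sliding_window_heatmap(image, win=64, stride=16, threshold=0.5):
    """Slide a fixed-size window over the image, score each patch with the
    classifier, and accumulate positive scores into a heat map."""
    h, w = image.shape[:2]
    heatmap = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = image[y:y + win, x:x + win]
            score = classify_patch(patch)
            if score >= threshold:
                heatmap[y:y + win, x:x + win] += score
    return heatmap

if __name__ == "__main__":
    frame = np.random.rand(256, 256)   # stand-in for a dash-cam frame
    heat = sliding_window_heatmap(frame)
    # Regions with high accumulated scores indicate likely motorcyclists.
    print("max heat value:", heat.max())
```

Thresholding the accumulated heat map (rather than individual window scores) is what suppresses isolated false positives and yields the final motorcyclist regions.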
As carving motifs continue to develop, their forms and variations become increasingly diverse. This makes it difficult to determine whether a carving carries a Jepara motif. In this paper, a transfer-learning method with a custom-developed fully connected (FC) head is used to identify characteristic Jepara motifs in a carving. The dataset is divided into three color spaces: LUV, RGB, and YCrCb. In addition, a sliding window, non-max suppression, and heat maps are used to search the carving object area and identify Jepara motifs. Testing of all weights shows that Xception achieves the highest accuracy in Jepara motif classification, namely 0.95, 0.95, and 0.94 for the LUV, RGB, and YCrCb color-space datasets, respectively. However, when all of these model weights are applied in the Jepara motif identification system, ResNet50 outperforms all other networks, with motif identification percentages of 84%, 79%, and 80% for the LUV, RGB, and YCrCb color spaces, respectively. These results show that the system can help determine whether a carving is a Jepara carving by identifying the characteristic Jepara motifs it contains.
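As a rough illustration of the transfer-learning setup with a developed FC head, the sketch below attaches a small classification head to a pre-trained Xception backbone in tf.keras. The head layout, input resolution, and class count are assumptions for illustration, not the paper's configuration; the same pattern applies to ResNet50 via tf.keras.applications.ResNet50.

```python
import tensorflow as tf

NUM_CLASSES = 2              # e.g. "Jepara motif" vs "not Jepara" -- assumed
INPUT_SHAPE = (224, 224, 3)  # assumed input resolution

# Pre-trained Xception backbone, frozen so that only the new FC head is trained.
backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=INPUT_SHAPE)
backbone.trainable = False

# Custom fully connected (FC) head on top of the backbone features.
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # datasets not shown
```

Training the same head on images converted to LUV, RGB, and YCrCb would reproduce the paper's three color-space comparisons; the sliding-window, non-max-suppression, and heat-map stages then operate on the classifier's patch-level outputs.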
Babies are not yet able to report the pain they experience; instead, they cry when in pain. With the rapid development of computer vision technology in the last few years, many researchers have tried to recognize pain from babies' expressions using machine learning and image processing. In this paper, a Deep Convolutional Neural Network (DCNN) Autoencoder and a Long Short-Term Memory (LSTM) network are used to detect crying and pain level from baby facial expressions in video. The DCNN Autoencoder extracts latent features from a single frame of the baby's face. Sequences of extracted latent features are then fed to the LSTM so that crying and the pain level can be recognized. Face detection and face landmark detection are also used to frontalize the baby's facial image before it is processed by the DCNN Autoencoder. Testing of the DCNN Autoencoder shows that the best architecture uses three convolutional layers and three transposed convolutional layers; for the LSTM classifier, the best model uses sequences of four frames.
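A hedged sketch of the DCNN-autoencoder plus LSTM pipeline described above, in tf.keras. Filter counts, latent size, image size, and the number of output classes are illustrative assumptions; only the structural choices reported in the abstract (three convolutional and three transposed-convolutional layers, four-frame sequences) are taken from it.

```python
import tensorflow as tf

IMG = (64, 64, 1)   # assumed frontalized face size
LATENT = 128        # assumed latent-feature dimension
SEQ_LEN = 4         # four-frame sequences, as reported
NUM_CLASSES = 3     # e.g. no cry / cry, low pain / cry, high pain -- assumed

# --- DCNN autoencoder: 3 convolutional + 3 transposed-convolutional layers ---
inp = tf.keras.Input(shape=IMG)
x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
x = tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = tf.keras.layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)
latent = tf.keras.layers.Dense(LATENT, name="latent")(tf.keras.layers.Flatten()(x))

y = tf.keras.layers.Dense(8 * 8 * 128, activation="relu")(latent)
y = tf.keras.layers.Reshape((8, 8, 128))(y)
y = tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(y)
y = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(y)
out = tf.keras.layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(y)

autoencoder = tf.keras.Model(inp, out)
encoder = tf.keras.Model(inp, latent)   # used to extract latent features per frame
autoencoder.compile(optimizer="adam", loss="mse")

# --- LSTM classifier over sequences of latent features ---
lstm = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, LATENT)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
lstm.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
```

In use, the autoencoder is trained on frontalized face frames for reconstruction, the encoder then produces one latent vector per frame, and consecutive four-frame latent sequences are fed to the LSTM for cry and pain-level classification.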
In this paper, we develop a pattern recognition system that detects whether an infant is crying using only facial features. The system first detects the baby's face using Haar-like features, then finds the facial components using a trained active shape model (ASM). The extracted features are then fed to a chaotic neural network classifier. The system is designed so that when the test pattern is not a crying baby face the network remains chaotic, but when the test pattern is a crying baby face it switches to periodic behavior. Predicting whether a baby is crying from facial features alone is still a challenging problem for existing computer vision systems. Although a crying baby is easier to detect from sound, most CCTV cameras do not have a microphone to record it, which is why we use only facial features. Chaotic neural networks (CNN) have been used for pattern recognition since 1989, but only recently have they received significant attention from the computer vision community. The chaotic neural network used in this paper is the Ideal Modified Adachi Neural Network (Ideal-M-AdNN). Experiments show that Ideal-M-AdNN with ASM features is able to detect crying baby faces with an accuracy of up to 93%. Nevertheless, this experiment is still novel and is limited to still images.
Index Terms-Chaotic neural networks, active shape model, chaotic pattern recognition, ideal modified Adachi neural network, infant facial cry detection.
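A minimal sketch of the face-detection front end of this pipeline, using OpenCV's pre-trained Haar cascade (a real cv2 API). The ASM landmark fitting and the Ideal-M-AdNN classifier are indicated only by hypothetical placeholder functions and are not implemented here; the input file name is assumed.

```python
import cv2

def detect_faces(gray_image):
    """Detect candidate baby faces with OpenCV's pre-trained Haar cascade."""
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    return detector.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)

def extract_asm_features(face_roi):
    """Hypothetical placeholder: fit a trained ASM and return landmark features."""
    raise NotImplementedError("ASM fitting is not shown in this sketch")

def classify_cry(features):
    """Hypothetical placeholder: Ideal-M-AdNN decides crying vs. not crying."""
    raise NotImplementedError("Chaotic classifier is not shown in this sketch")

if __name__ == "__main__":
    image = cv2.imread("baby.jpg")   # example input, path assumed
    if image is not None:
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detect_faces(gray):
            face = gray[y:y + h, x:x + w]
            # features = extract_asm_features(face)
            # label = classify_cry(features)
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("detected.jpg", image)
```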