A handwriting recognition system for hijaiyah letters is needed to automatically correct the writing of someone who is learning to form them. Implementing such a system poses several challenges: the wide variation in handwritten hijaiyah shapes, the choice of a suitable architecture, and the large amount of training data required for the system to predict accurately. The Convolutional Neural Network (CNN) is a deep learning algorithm that is effective for image processing and can be trained in both supervised and unsupervised settings. The CNN model was trained on the Hijaiyah1SKFI dataset, which consists of 2,100 samples in 30 classes (the letters alif through ya) written by 4 different people, with 80% used as training data and 20% as test data. Because the dataset is small, this paper applies data augmentation as an optimization, so that even with little data the variety of the training set increases. The architecture proposed in this paper, named SIP-Net, achieves 99.7% accuracy on the test data.
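The abstract mentions data augmentation to enlarge the small training set but does not state which transforms were used. The sketch below illustrates the general idea with label-preserving shifts and light noise on grayscale letter images; the specific transforms and parameters are assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, max_shift=2, noise_std=0.05):
    """Return a randomly shifted, lightly noised copy of a grayscale image.

    Small shifts and noise preserve the letter's identity; mirroring is
    deliberately avoided, since flipping a hijaiyah glyph would change it.
    """
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(image, (dy, dx), axis=(0, 1))
    noised = shifted + rng.normal(0.0, noise_std, image.shape)
    return np.clip(noised, 0.0, 1.0)

def expand_dataset(images, copies=4):
    """Grow a small dataset by adding `copies` augmented variants per image."""
    out = list(images)
    for img in images:
        out.extend(augment(img) for _ in range(copies))
    return out

# Example: 10 base images of 32x32 pixels grow to 50 samples with copies=4.
base = [rng.random((32, 32)) for _ in range(10)]
augmented = expand_dataset(base, copies=4)
```

In a real pipeline these variants would be generated on the fly each epoch (as frameworks such as Keras do), rather than materialized once up front.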
The paper presents intelligent surveillance robot control techniques operating over the web and mobile devices through an Internet of Things (IoT) connection. The robot is equipped with a Kinect Xbox 360 camera and a deep learning algorithm for recognizing objects in front of it; the algorithm used is OpenCV's Deep Neural Network (DNN) module. The intelligent surveillance robot in this study is named BNU 4.0. The robot is controlled by a NodeMCU V3 microcontroller, an electronic board based on the ESP8266 chip, which allows it to connect to an IoT cloud. The IoT cloud used in this research is cloudmqtt (https://www.cloudmqtt.com). With the Arduino program embedded in the NodeMCU V3 microcontroller, the robot can then be controlled via web and mobile; the mobile control program uses the Android MQTT IoT Application Panel.
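The control path described above (firmware on the NodeMCU subscribing to drive commands relayed through cloudmqtt) can be sketched from the publisher's side. The topic name, payload format, and command set below are illustrative assumptions; the abstract does not document the actual MQTT topics or message encoding used.

```python
import json

# Hypothetical topic and command set; the paper's real MQTT schema is unknown.
TOPIC = "bnu40/drive"
COMMANDS = {"forward", "backward", "left", "right", "stop"}

def build_command(direction, speed=128):
    """Encode a drive command as an MQTT (topic, payload) pair.

    The payload is JSON so the ESP8266 firmware can parse both the
    direction and a motor speed from a single message.
    """
    if direction not in COMMANDS:
        raise ValueError(f"unknown direction: {direction}")
    payload = json.dumps({"dir": direction, "speed": int(speed)})
    return TOPIC, payload

# A real client (e.g. the Android MQTT panel, or paho-mqtt on a PC) would
# publish the result, roughly: client.publish(*build_command("forward"))
topic, payload = build_command("forward", speed=200)
```

Keeping the command vocabulary small and flat like this is typical for ESP8266-class devices, which have limited memory for parsing complex messages.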
In this research, the performance of the Robust Regression method for face recognition is tested under illumination variations in the training dataset. Experiments were carried out using the Cropped Yale Face Database B. With this standard face database, the training process typically uses all images in subset 1, while testing is carried out on all images in the other subsets. The training process in this method builds a regressor (predictor). In these experiments, each subset is used in turn as training data, and combinations of sample images drawn from all subsets are also evaluated. The results show that using subset 1 images as training data actually produces the lowest face recognition performance, with an accuracy of 90.00%. Using the other subsets as training data delivers better performance, and the highest performance, 99.81% accuracy, is achieved by combining sample images from all subsets.
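The abstract describes training a per-class regressor and classifying a test face by how well each class predicts it. The sketch below shows that scheme using ordinary least squares as a simplified stand-in, since the abstract does not specify the exact robust regressor; all names and the synthetic data are illustrative.

```python
import numpy as np

def regression_classify(train, test_vec):
    """Classify a face vector by class-specific linear regression.

    `train` maps each class label to a matrix whose columns are vectorized
    training images of that class. The test vector is regressed onto each
    class's column space; the class with the smallest reconstruction
    residual wins. Plain least squares is used here as a simplified
    stand-in for the paper's robust regressor.
    """
    best_label, best_err = None, np.inf
    for label, X in train.items():
        beta, *_ = np.linalg.lstsq(X, test_vec, rcond=None)
        err = np.linalg.norm(test_vec - X @ beta)
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# Synthetic demo: two subjects, five 64-pixel training images each,
# generated as noisy copies of a per-subject prototype.
rng = np.random.default_rng(1)
proto = {c: rng.standard_normal(64) for c in ("A", "B")}
train = {c: np.stack([p + 0.05 * rng.standard_normal(64) for _ in range(5)],
                     axis=1)
         for c, p in proto.items()}
test_vec = proto["A"] + 0.05 * rng.standard_normal(64)
pred = regression_classify(train, test_vec)
```

Under strong illumination changes, squared-error regression like this is dominated by large residual pixels, which is precisely what robust variants are designed to downweight.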