Jatropha curcas (the physic nut) is a multipurpose plant whose parts, from leaves to fruit, offer many benefits; it is often used in cosmetic products and as a biodiesel feedstock. Diseases that attack the plant can reduce its yield, and the shortage of experts in this field, combined with farmers' limited knowledge, makes outbreaks difficult to handle. This problem can be addressed with deep learning, using the H2O framework, which was chosen because it computes quickly and can deliver good accuracy. In this study, H2O achieved a maximum average accuracy of 96.066% with a 60:40 split of training and test data, a single hidden layer, and 100 epochs. These results demonstrate that H2O can be used to identify diseases of Jatropha curcas.
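As a rough sketch of the experimental setup described above (not the authors' code), the best-performing configuration — a single hidden layer, 100 epochs, and a 60:40 train/test split — can be mirrored with scikit-learn's `MLPClassifier` standing in for an H2O deep-learning model; the synthetic dataset and the hidden-layer width of 64 are assumptions for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Placeholder for the jatropha leaf-image features used in the study.
X, y = make_classification(n_samples=500, n_features=20, n_classes=3,
                           n_informative=10, random_state=42)

# 60:40 train/test split, as in the paper's best-performing configuration.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.40, random_state=42)

# One hidden layer, 100 training epochs (layer width is an assumption).
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=100, random_state=42)
model.fit(X_train, y_train)

acc = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy: {acc:.3f}")
```

In the actual study this role is played by H2O's deep-learning estimator, which exposes equivalent `hidden` and `epochs` parameters.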
Nowadays, software influences many sectors of life, serving both business and personal needs. To produce high-quality software, testing is needed to avoid software defects. Research on software-defect prediction involving machine learning is currently being carried out by many researchers, and one important step in this approach is feature selection. In this study, the researchers performed feature selection based on software-metric categories to determine the accuracy of software-defect prediction, using thirteen datasets from the NASA MDP repository: CM1, JM1, KC1, KC3, KC4, MC1, MC2, MW1, PC1, PC2, PC3, PC4, and PC5. For classification, five classifiers were used: Naive Bayes, Decision Tree, Random Forest, K-Nearest Neighbors, and Support Vector Machine. The results show that each attribute in the software-metric categories has an effect on each dataset. The Naive Bayes and Random Forest algorithms give better performance than the other algorithms in classifying software defects with metric-based feature selection. On the other hand, the best metric category for each individual classifier is the Misc metrics. Judging by average AUC, the category that gives the best performance is the LoC metrics, followed by the Misc metrics; both categories achieved their highest AUC values with the Random Forest classifier.
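The evaluation procedure described above — selecting feature subsets by metric category, then scoring each classifier by AUC — can be sketched as follows. The column groupings, dataset, and category names here are illustrative stand-ins, not the actual NASA MDP metric definitions, and only two of the five classifiers are shown for brevity.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Toy stand-in for a NASA MDP dataset; column groups mimic metric categories.
X, y = make_classification(n_samples=400, n_features=12, random_state=0)
categories = {"LoC": [0, 1, 2, 3], "Halstead": [4, 5, 6, 7], "Misc": [8, 9, 10, 11]}

classifiers = {
    "NaiveBayes": GaussianNB(),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Score every (metric category, classifier) pair by cross-validated AUC.
results = {}
for cat_name, cols in categories.items():
    for clf_name, clf in classifiers.items():
        auc = cross_val_score(clf, X[:, cols], y, cv=5, scoring="roc_auc").mean()
        results[(cat_name, clf_name)] = auc

best = max(results, key=results.get)
print("best (category, classifier):", best, f"AUC={results[best]:.3f}")
```

The study's conclusion corresponds to the LoC and Misc categories producing the highest such AUC scores, both under the Random Forest classifier.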
Researchers have collected Twitter data to study a wide range of topics, one of which is natural disasters. A social network sensor was developed in existing research to filter natural-disaster information from direct eyewitnesses, non-eyewitnesses, and non-disaster information. It can be used as a tool for early warning or monitoring when natural disasters occur. The main component of the social network sensor is tweet-text classification. As in text-classification research in general, the challenge is the feature-extraction method used to convert Twitter text into structured data. The strategy commonly used is vector space representation; however, it has the potential to produce high-dimensional data. This research focuses on feature-extraction methods that resolve the high-dimensionality issue. We propose a hybrid approach of word2vec-based and lexicon-based feature extraction to produce new features. The experimental results show that the proposed method uses fewer features and improves classification performance, with an average AUC value of 0.84 at 150 features; this value is obtained using only the word2vec-based method. In the end, this research shows that the lexicon-based features did not contribute to the performance improvement of social network sensor predictions for natural disasters.
HIGHLIGHTS
- Text classification is generally used only for sentiment analysis; it is still rarely used to identify direct eyewitnesses in cases of natural disasters.
- A common problem in text-mining research is that features extracted with the vector space representation method generate high-dimensional data.
- A hybrid word2vec-based and lexicon-based feature-extraction experiment was conducted to find a method that generates new low-dimensional features while also improving classification performance.
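The hybrid feature extraction described above can be sketched as follows: a word2vec component averages word embeddings into a fixed-size vector, and a lexicon component adds a score from a dictionary of eyewitness cue words. The embeddings and lexicon below are tiny hypothetical examples; a real pipeline would train embeddings (e.g. with gensim's Word2Vec) on the tweet corpus and use a curated lexicon.

```python
import numpy as np

# Toy 4-dimensional "word2vec" embeddings (a real system trains these).
embeddings = {
    "flood": np.array([0.9, 0.1, 0.0, 0.2]),
    "water": np.array([0.8, 0.2, 0.1, 0.1]),
    "house": np.array([0.1, 0.7, 0.3, 0.0]),
    "my":    np.array([0.0, 0.1, 0.9, 0.4]),
}
# Hypothetical eyewitness lexicon (first-person / perceptual cues).
lexicon = {"my", "i", "here", "saw"}

def tweet_features(tokens):
    # word2vec part: average the embeddings of known tokens.
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    w2v = np.mean(vecs, axis=0) if vecs else np.zeros(4)
    # lexicon part: fraction of tokens that appear in the cue lexicon.
    lex = sum(t in lexicon for t in tokens) / max(len(tokens), 1)
    return np.concatenate([w2v, [lex]])  # low-dimensional hybrid feature

feat = tweet_features(["my", "house", "flood", "water"])
print(feat.shape)  # (5,)
```

The dimensionality of the hybrid vector stays small (embedding size plus a handful of lexicon scores), which is how the approach avoids the high dimensionality of vector space representations.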
<p><em>One implementation of object counting is counting road users from video streamed by CCTV cameras. CCTV video is usually processed on the server side, which requires sending the video data. If the goal is only to gauge traffic density, that approach is considered too expensive to implement because of the internet-connection and bandwidth costs involved. The solution is to use a small computing device that processes the video first and sends only the counting results to the server periodically. This study compares the Tensorflow Object Counting learning algorithm with the MOG2 Background Subtractor image-processing algorithm to determine counting accuracy. The results show that the MOG2 Background Subtractor technique gives better accuracy, and its processing uses only a small percentage of the memory and processor required by the Tensorflow Object Counting technique. The MOG2 Background Subtractor technique is therefore expected to be usable on resource-constrained devices.</em></p><p><em><strong>Keywords</strong></em><em>: Object Counting, Tensorflow, MOG2 Background Subtractor</em></p>
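The background-subtraction idea behind the comparison above can be sketched with a simplified running-average model: pixels that differ strongly from the learned background are marked as foreground, and clusters of foreground pixels are counted. This is a deliberately stripped-down analogue; OpenCV's `cv2.createBackgroundSubtractorMOG2` implements the full Gaussian-mixture version used in the study, and the threshold, learning rate, and minimum area below are illustrative values.

```python
import numpy as np

def count_foreground_blobs(frames, threshold=40, learning_rate=0.05, min_area=4):
    """Report, per frame, whether a foreground object is present."""
    background = frames[0].astype(float)
    counts = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - background)
        mask = diff > threshold                      # foreground pixels
        counts.append(int(mask.sum() >= min_area))   # crude presence count
        # Slowly blend the current frame into the background model.
        background = (1 - learning_rate) * background + learning_rate * frame
    return counts

# Static background, then a bright 3x3 "vehicle" appears in the last frame.
bg = np.zeros((10, 10), dtype=np.uint8)
moving = bg.copy()
moving[3:6, 3:6] = 255
print(count_foreground_blobs([bg, bg, moving]))  # [0, 1]
```

Because this approach maintains only a background image and a per-frame difference, its memory and CPU footprint stays far below that of running a neural-network detector, which matches the study's motivation for small devices.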
<p><em>Teeth are part of the human digestive system, serving to break food down for easier processing. Dental diseases can impair this function and cannot always be distinguished quickly by junior dentists. This problem can be addressed with computational methods: the FIS Tsukamoto algorithm can be used for classification, and optimization of its membership functions is needed to improve accuracy. Optimizing the FIS Tsukamoto membership functions with Simulated Annealing produced a highest accuracy of 92.5% on the 100 test data.</em></p><p><em><strong>Keywords</strong></em><em>: Simulated Annealing; FIS Tsukamoto; Dental Disease; Optimization</em></p>
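The combination described above — a Tsukamoto fuzzy inference system whose membership-function parameters are tuned by simulated annealing — can be sketched in miniature as follows. The single-input FIS, the two rules, the fitted data points, and the cooling schedule are all hypothetical simplifications of the paper's dental-diagnosis system.

```python
import math
import random

random.seed(0)

# Minimal one-input Tsukamoto FIS: two rules with monotone consequents.
# 'params' are the antecedent membership breakpoints that SA will tune.
def tsukamoto(x, params):
    a, b = params
    low = max(0.0, min(1.0, (b - x) / (b - a)))  # degree of "low"
    high = 1.0 - low                             # degree of "high"
    # Tsukamoto consequents are monotone, so each rule yields a crisp z.
    z_low, z_high = 100 - 80 * low, 20 + 80 * high
    # Defuzzify as the firing-strength-weighted average of rule outputs.
    return (low * z_low + high * z_high) / (low + high + 1e-9)

# Hypothetical labelled data the FIS should reproduce.
data = [(1, 30), (3, 50), (5, 70), (7, 90)]

def error(params):
    return sum((tsukamoto(x, params) - y) ** 2 for x, y in data)

# Simulated annealing over the breakpoints (a, b).
state, temp = (0.0, 10.0), 1.0
for _ in range(2000):
    cand = (state[0] + random.uniform(-0.5, 0.5),
            state[1] + random.uniform(-0.5, 0.5))
    if cand[0] < cand[1]:  # keep the breakpoints ordered
        delta = error(cand) - error(state)
        # Accept improvements always; accept worse moves with decaying odds.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            state = cand
    temp *= 0.995  # geometric cooling schedule

print("tuned breakpoints:", state, "SSE:", round(error(state), 2))
```

In the paper, the quantity being optimized is classification accuracy on dental-disease cases rather than a regression error, but the loop structure — perturb the membership parameters, evaluate, and accept probabilistically under a falling temperature — is the same.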