Classification is a data mining analysis method used to build a model that describes data classes or predicts data trends. It has been applied in various areas, including health. One of the classification methods used is Naive Bayes. This study aims to predict the weight of infants born to hypertensive and nonhypertensive mothers using the Naive Bayes method. A total of 219 records of pregnant women were taken from the medical records of the Obstetrics and Gynecology department of Muhammadiyah Palembang Hospital from January 2017 until September 2017. The data were divided into two groups: 188 records for training and 31 for testing. Performance was analyzed using WEKA, and the results showed that Naive Bayes achieved an accuracy of 80.372%. This accuracy indicates that Naive Bayes works well for predicting the weight of infants of hypertensive and nonhypertensive mothers. The result is expected to serve as a reference for further research that compares Naive Bayes with other classification methods and incorporates additional factors such as pregnancy complications and multiple births.
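As a minimal sketch of the workflow described above (not the authors' WEKA setup), the following Python snippet trains a Naive Bayes classifier on a hypothetical maternal-record table and evaluates it on a held-out test set of 31 records. The file name, column names, and feature encodings are assumptions for illustration only.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Hypothetical table of 219 maternal records; columns are illustrative,
# with hypertension already encoded as 0/1.
records = pd.read_csv("maternal_records.csv")
X = records[["age", "hypertension", "gestational_age"]]  # assumed predictors
y = records["birth_weight_class"]                        # assumed target label

# Split the 219 records into roughly 188 training and 31 testing rows.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=31, random_state=0, stratify=y
)

model = GaussianNB().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```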
The University of California Irvine (UCI) heart disease dataset has missing data in several attributes. Missing data can cause the loss of important attribute information, yet the affected records cannot simply be deleted from the dataset. Missing data can be handled in several ways, including deletion, imputation by the mean or mode, or imputation by prediction methods. In this study, an attribute was deleted if more than 70% of its values were missing; attributes with 1% or less missing data were imputed using the mean or mode; and an artificial neural network was used to impute attributes with more than 1% missing data. The techniques used to handle missing data were evaluated through the performance of classification methods on the treated dataset. The classification methods used were Artificial Neural Network, Naïve Bayes, Support Vector Machine, and K-Nearest Neighbor. The performance of each classification method without handling missing data was compared with its performance after imputation, in terms of accuracy, sensitivity, specificity, and ROC. In addition, the Mean Squared Error was compared to see how close the predicted labels were to the original labels. The lowest Mean Squared Error was obtained by the Artificial Neural Network, which means that it worked best on the dataset whose missing data had been handled. The accuracy, specificity, and sensitivity of each classification method showed that imputing missing data can increase classification performance, especially for the Artificial Neural Network.
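A minimal sketch of the thresholded missing-data strategy described above, assuming a pandas DataFrame of numeric attributes like those in the UCI heart disease data. The 70% and 1% thresholds follow the abstract; the specific regressor standing in for the artificial neural network imputer and the column handling are illustrative assumptions.

```python
import pandas as pd
from sklearn.neural_network import MLPRegressor

def handle_missing(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    miss = df.isna().mean()                          # fraction missing per attribute
    df = df.drop(columns=miss[miss > 0.70].index)    # >70% missing: delete the attribute

    for col in df.columns:
        frac = df[col].isna().mean()
        if frac == 0:
            continue
        if frac <= 0.01:
            # <=1% missing: impute with the mean (or mode for categoricals)
            fill = df[col].mean() if pd.api.types.is_numeric_dtype(df[col]) else df[col].mode()[0]
            df[col] = df[col].fillna(fill)
        else:
            # >1% missing: predict the attribute from the others with a small ANN
            features = df.columns.drop(col)
            known = df[df[col].notna()]
            unknown = df[df[col].isna()]
            ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
            ann.fit(known[features].fillna(0), known[col])   # assumes numeric attributes
            df.loc[unknown.index, col] = ann.predict(unknown[features].fillna(0))
    return df
```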
Psychological tests are an important need in various areas of human life. They are used not only in clinical settings but also in the workplace. Psychological tests are carried out to learn more about a person's personality, and one of the methods psychologists use to assess personality types is the Kraepelin test. In practice, psychological tests of their object, namely humans with all their attitudes and behavior, are still administered the old way: printed sheets or series of questions are given to the test takers, and the results are calculated and assessed manually. Errors in the assessment affect the results and lead to inappropriate conclusions. Preparing the questions requires time and high accuracy, so a system was built using the Linear Congruential Method (LCM). The LCM is used to generate random numbers with good performance in terms of complexity and access time. The 20-minute test application consists of 40 columns and 60 rows of questions with a time limit of 30 seconds for each column. The website-based Kraepelin test application supports all related parties, both test organizers and test takers, in obtaining real-time and accurate results. The implementation of the Kraepelin test is in accordance with the purpose of the test, namely to measure aptitude (speed, accuracy, stability, and work endurance). Based on the test results, score calculation with the system is faster, taking about 2 seconds compared with 5 minutes for manual calculation.
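A minimal sketch of how a linear congruential generator could fill the 40-column by 60-row Kraepelin grid with single digits. The recurrence X(n+1) = (a·X(n) + c) mod m is the standard LCM form; the multiplier, increment, and modulus below are common textbook constants chosen for illustration, not the values used in the paper.

```python
def lcg(seed: int, a: int = 1103515245, c: int = 12345, m: int = 2**31):
    """Yield an endless stream of pseudo-random integers X(n+1) = (a*X(n) + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def kraepelin_grid(seed: int = 42, columns: int = 40, rows: int = 60):
    gen = lcg(seed)
    # Each cell holds a digit 0-9 that the test taker sums column by column.
    return [[next(gen) % 10 for _ in range(rows)] for _ in range(columns)]

grid = kraepelin_grid()
print(len(grid), "columns of", len(grid[0]), "digits; first column:", grid[0][:10])
```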
The Naïve Bayes Classifier is one of the classification algorithms in data mining, with good processing speed and a fairly high level of accuracy. In the classification process, the Naïve Bayes Classifier applies Bayes' theorem to map a data instance to a class by taking into account the probabilities of its attribute values. Because the classifier bases its calculation entirely on these probabilities, it is sensitive to zero counts: if an attribute value never occurs with a class, its conditional probability is 0, which reduces the accuracy of the classification. Therefore, in this study the Laplace correction technique is used to fix this weakness of the Naïve Bayes Classifier. The result of this research is that the Laplace correction succeeded in improving the performance of the Naïve Bayes Classifier by correcting the zero probabilities for each attribute. The accuracy of the Naïve Bayes Classifier after being improved with the Laplace correction is 94.44%.
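A minimal sketch of the Laplace (add-one) correction for a categorical Naive Bayes conditional probability. Without the correction, an attribute value never observed with a class yields P = 0 and wipes out the whole product of probabilities; adding 1 to every count keeps the estimate positive. The function and data below are illustrative, not the paper's implementation.

```python
from collections import Counter

def conditional_prob(value, attribute_values, laplace: bool = True) -> float:
    """Estimate P(attribute = value | class) from the values observed for one class."""
    counts = Counter(attribute_values)
    k = len(set(attribute_values) | {value})      # number of distinct attribute values
    if laplace:
        return (counts[value] + 1) / (len(attribute_values) + k)
    return counts[value] / len(attribute_values)

seen = ["yes", "yes", "no", "yes"]        # attribute values observed for one class
print(conditional_prob("maybe", seen, laplace=False))  # 0.0 -> zeroes out the product
print(conditional_prob("maybe", seen, laplace=True))   # small positive estimate instead
```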
Adding layers to the U-Net architecture leads to additional parameters and network complexity. The Visual Geometry Group (VGG) architecture with a 16-layer backbone can mitigate this problem by using small convolutions. Densely connected networks (DenseNet) can be used to avoid redundant feature learning in VGG by directly connecting each layer to the feature maps of all previous layers, and adding a Dropout layer protects DenseNet from overfitting. This study proposes VG-DropDNet, an architecture that combines VGG, DenseNet, and U-Net with a dropout layer, for retinal blood vessel segmentation. VG-DropDNet is applied to the Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE) datasets. The results on DRIVE give an accuracy of 95.36%, sensitivity of 79.74%, and specificity of 97.61%. The F1-score on DRIVE of 0.8144 indicates that VG-DropDNet has good precision and recall, and the IoU of 68.70% shows that the segmented images closely resemble their ground truth. The results on STARE are excellent, with accuracy of 98.56%, sensitivity of 91.24%, specificity of 92.99%, and IoU of 86.90%, showing that the proposed method is accurate and robust for retinal blood vessel segmentation. Cohen's kappa coefficient for VG-DropDNet is 0.8386 on DRIVE and 0.98 on STARE, indicating that its results are consistent and precise on both datasets. The results on these datasets indicate that VG-DropDNet is effective, robust, and stable for retinal blood vessel segmentation.
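A minimal Keras sketch of the key building block suggested by the abstract: a DenseNet-style block whose layers each receive the concatenation of all previous feature maps, followed by a Dropout layer to curb overfitting. The number of layers, growth rate, and dropout rate are illustrative assumptions, not the VG-DropDNet configuration from the paper.

```python
from tensorflow.keras import layers

def dense_block_with_dropout(x, num_layers: int = 4, growth_rate: int = 16,
                             dropout_rate: float = 0.2):
    for _ in range(num_layers):
        y = layers.Conv2D(growth_rate, 3, padding="same", activation="relu")(x)
        y = layers.Dropout(dropout_rate)(y)          # regularization against overfitting
        x = layers.Concatenate()([x, y])             # dense connectivity: reuse all earlier features
    return x

# Usage: plug the block into an encoder path, e.g. for a retinal image patch.
inputs = layers.Input(shape=(None, None, 3))
features = dense_block_with_dropout(inputs)
```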