2021
DOI: 10.32493/informatika.v5i4.7575

Perbandingan Kinerja Algoritma Klasifikasi Naive Bayes, Support Vector Machine (SVM), dan Random Forest untuk Prediksi Ketidakhadiran di Tempat Kerja (Comparison of the Performance of the Naive Bayes, Support Vector Machine (SVM), and Random Forest Classification Algorithms for Predicting Absenteeism in the Workplace)

Abstract: Absenteeism is a problem for companies. Absenteeism means that a task assigned to an individual cannot be completed because the individual is not present. Absence from work is influenced by many factors, including mismatched working hours and job demands, as well as serious accidents or illness, low morale, poor working conditions, boredom, lack of supervision, personal problems, insufficient nutrition, transportation problems, stress, workload, and dissatisfaction. The purpose of t…
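As a concrete illustration of the comparison the title and abstract describe, the following is a minimal sketch that trains the three classifiers with scikit-learn. The dataset file name, target column, and binarization threshold are assumptions for illustration only, not details taken from the paper.

# Hypothetical comparison of the three classifiers discussed in the paper.
# The dataset path, target column, and preprocessing are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Assumed layout of an "Absenteeism at work"-style dataset (semicolon-separated).
df = pd.read_csv("Absenteeism_at_work.csv", sep=";")
X = df.drop(columns=["Absenteeism time in hours"])
y = (df["Absenteeism time in hours"] > 0).astype(int)  # assumed binarization of the target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# SVM benefits from feature scaling; the tree and NB models are left on raw features here.
scaler = StandardScaler().fit(X_train)

models = {
    "Naive Bayes": (GaussianNB(), False),
    "SVM (RBF kernel)": (SVC(kernel="rbf", C=1.0), True),
    "Random Forest": (RandomForestClassifier(n_estimators=100, random_state=42), False),
}

for name, (model, needs_scaling) in models.items():
    Xtr = scaler.transform(X_train) if needs_scaling else X_train
    Xte = scaler.transform(X_test) if needs_scaling else X_test
    model.fit(Xtr, y_train)
    acc = accuracy_score(y_test, model.predict(Xte))
    print(f"{name}: accuracy = {acc:.3f}")

Accuracy is used here only as the simplest common yardstick; the same loop could report precision, recall, or F1 if class imbalance matters.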

Cited by 8 publications (8 citation statements) | References 9 publications
“…Research (Susilowati et al., 2015) explained SVM as a machine learning method that works on the principle of Structural Risk Minimization (SRM), aiming to find the best hyperplane that can separate two classes in the input space. The decision function of the Support Vector Machine is given by the following equation (Nalatissifa et al., 2021):

$$f(x) = \sum_{i} \alpha_i y_i K(x, x_i) + b \qquad (3)$$

The feature space is obtained by converting the input space, in which a dot product can only separate linearly separable data, into a high-dimensional form (the feature space).…”
Section: Methods (citation type: mentioning, confidence: 99%)
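To make equation (3) concrete, the following is a minimal sketch assuming scikit-learn's SVC with an RBF kernel on synthetic data (all data and parameter values are illustrative assumptions). The fitted model exposes the products α_i y_i (dual_coef_) and the bias b (intercept_), so the decision value can be recomputed directly as Σ_i α_i y_i K(x, x_i) + b.

import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Synthetic two-class data, purely illustrative.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

clf = SVC(kernel="rbf", gamma=0.5, C=1.0).fit(X, y)

def rbf_kernel(x, xi, gamma=0.5):
    # K(x, x_i) = exp(-gamma * ||x - x_i||^2)
    return np.exp(-gamma * np.sum((x - xi) ** 2))

x_new = X[0]
# dual_coef_ holds alpha_i * y_i for each support vector; intercept_ is b.
f = sum(
    coef * rbf_kernel(x_new, sv)
    for coef, sv in zip(clf.dual_coef_[0], clf.support_vectors_)
) + clf.intercept_[0]

print("manual f(x):            ", f)
print("sklearn decision_function:", clf.decision_function(x_new.reshape(1, -1))[0])

The two printed values should agree, confirming that the library's decision function is exactly the kernel expansion in equation (3).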
“…a) Naive Bayes. Naive Bayes is one of the simplest probabilistic classification techniques based on Bayes' theorem; it is a popular classifier and one of the top 10 data mining algorithms [13]. To simplify the calculation, Naive Bayes assumes that the effect of an attribute value on a given class is independent of the values of the other attributes [14]. b) Support Vector Machine. The support vector machine is a supervised learning technique for classification.…”
Section: Literature Review (citation type: mentioning, confidence: 99%)
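A minimal sketch of the independence assumption mentioned above, assuming Gaussian class-conditional densities as in scikit-learn's GaussianNB on synthetic data (all names and values are illustrative assumptions). Under that assumption the class-conditional likelihood factorizes into a product over individual attributes, which is what the per-attribute means and variances below implement.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import make_classification

# Synthetic data, purely illustrative.
X, y = make_classification(n_samples=300, n_features=5, random_state=1)

nb = GaussianNB().fit(X, y)

# Under the independence assumption, for each class c:
#   P(c | x) is proportional to P(c) * prod_j P(x_j | c)
# GaussianNB models each P(x_j | c) as a univariate Gaussian with a
# per-class, per-attribute mean (theta_) and variance (var_, scikit-learn >= 1.0).
x_new = X[:1]
log_prior = np.log(nb.class_prior_)
log_likelihood = -0.5 * np.sum(
    np.log(2.0 * np.pi * nb.var_) + (x_new - nb.theta_) ** 2 / nb.var_, axis=1
)
log_posterior = log_prior + log_likelihood
print("predicted class:   ", nb.classes_[np.argmax(log_posterior)])
print("sklearn prediction:", nb.predict(x_new)[0])

Working in log space turns the product over attributes into a sum, which is both numerically stable and a direct reading of the factorized likelihood.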
“…SVM is a good algorithm for data classification [19], built on the principle of finding the best hyperplane that separates the two data classes [20]. The best hyperplane is determined by measuring the margin of the hyperplane and maximizing it; the margin is the distance between the hyperplane and the nearest point of each class, and these closest points are called the support vectors [21].…”
Section: Data Processing, 1) Support Vector Machine (SVM) (citation type: mentioning, confidence: 99%)
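The following sketch illustrates the hyperplane, margin, and support vectors described in this statement, assuming a linear scikit-learn SVC on synthetic, well-separated data; the dataset and parameter values are illustrative assumptions, not taken from the paper.

import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

# Two well-separated blobs, so the maximum-margin hyperplane is easy to inspect.
X, y = make_blobs(n_samples=100, centers=2, random_state=6)

clf = SVC(kernel="linear", C=1000).fit(X, y)  # large C approximates a hard margin

w = clf.coef_[0]                   # normal vector of the hyperplane w.x + b = 0
b = clf.intercept_[0]
margin = 2.0 / np.linalg.norm(w)   # distance between the two margin boundaries

print("hyperplane normal w:", w)
print("bias b:", b)
print("margin width 2/||w||:", margin)
print("support vectors (closest points of each class):")
print(clf.support_vectors_)

Maximizing the margin is equivalent to minimizing ||w||, so the printed 2/||w|| is the quantity the training procedure is implicitly maximizing, and the listed support vectors are the only points that determine it.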