2022
DOI: 10.56705/ijodas.v3i1.42
Analisis Performa Algoritma Stochastic Gradient Descent (SGD) Dalam Mengklasifikasi Tahu Berformalin (Performance Analysis of the Stochastic Gradient Descent (SGD) Algorithm in Classifying Formalin-Laced Tofu)

Abstract: Formalin-laced tofu is a food product treated with chemical preservatives that extend its shelf life compared with formalin-free tofu; the formalin gives it a chewier texture and a clean white color. This study aims to classify tofu as formalin-laced or formalin-free. The paper uses the Stochastic Gradient Descent algorithm, better known in practice as the SGD Classifier, a machine learning algorithm for classi…

Cited by 5 publications (4 citation statements)
References 9 publications
“…The model was trained using the Stochastic Gradient Descent (SGD) optimization algorithm [45] and the Cross-Entropy Loss Function (CLF) [46]. The experimental data utilized in this research come from the rolling-bearing condition-monitoring bench at the University of Paderborn (UPB) [47], Germany.…”
Section: Bearing Fault Diagnosis Methods
confidence: 99%
“…SGDC is a simple and efficient approach to linear classification using discriminative learning. It is an iterative optimization algorithm useful for finding the minimum of a differentiable function [17,18]. The algorithm begins by making an initial guess.…”
Section: SGD Classifier
confidence: 99%
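The iterative minimization this snippet describes — start from an initial guess, then repeatedly step toward the minimum of a differentiable function — can be illustrated with a minimal, self-contained example that is not from the paper: plain gradient descent on the one-dimensional function f(x) = (x − 3)².

```python
# Minimal illustration (not from the paper): iterative gradient descent
# starting from an initial guess and converging toward the minimum of
# the differentiable function f(x) = (x - 3)^2, whose gradient is 2(x - 3).
def gradient(x):
    return 2.0 * (x - 3.0)

x = 0.0          # initial guess
alpha = 0.1      # learning rate
for _ in range(100):
    x -= alpha * gradient(x)   # update: x <- x - alpha * f'(x)

print(round(x, 4))  # -> 3.0, the minimizer of f
```

Each step shrinks the distance to the minimizer by a constant factor (here 0.8), so the iterate converges geometrically to x = 3; the stochastic variant used by SGDC replaces the exact gradient with a per-sample estimate.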
“…Updates are performed simultaneously for all values j = 0, ..., n. The parameter α is the learning rate, which controls how large each update is. The equation for J(θ) is given in Equation (3), where L is the loss function applied to the training data (x1, y1), ..., (xn, yn), and R is the regularization, or penalty on model complexity [18].…”
Section: SGD Classifier
confidence: 99%
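Based on the wording of this snippet, the objective and its update rule can be written out explicitly. This is a reconstruction from the snippet's description (loss L over the training pairs, penalty R, learning rate α, simultaneous parameter updates), not a copy of the cited paper's Equation (3):

```latex
% Regularized objective over the training data (x_1, y_1), ..., (x_n, y_n):
% L is the per-example loss, R a penalty on model complexity.
J(\theta) = \frac{1}{n} \sum_{i=1}^{n} L\big(f(x_i;\theta),\, y_i\big) + R(\theta)

% Simultaneous gradient-descent update for all parameters j = 0, ..., n,
% with learning rate \alpha:
\theta_j \leftarrow \theta_j - \alpha \,\frac{\partial}{\partial \theta_j} J(\theta)
```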
“…Updates are performed simultaneously for all values j = 0, ..., n. The parameter α is the learning rate, which controls how large each update is. The equation for J(θ) can be found in Equation (5), where L is the loss function applied to the training data (x1, y1), ..., (xn, yn), and R is the regularization, or penalty on model complexity [23].…”
Section: Identification Process (Proses Identifikasi)