2022
DOI: 10.26798/jiko.v6i2.600

Komparasi Fungsi Aktivasi Relu Dan Tanh Pada Multilayer Perceptron (Comparison of the ReLU and Tanh Activation Functions in a Multilayer Perceptron)

Abstract: Neural networks are a popular method in machine-learning research, and activation functions, particularly ReLU and Tanh, play an essential role in a neural network by minimizing the error between the output layer and the target class. Varying the number of hidden layers and the number of neurons in each hidden layer, this study analyzes 8 models for classifying the Titanic's Survivor dataset. The results show that the ReLU function performs bet…
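The abstract compares the ReLU and Tanh activations; a minimal NumPy sketch of the two functions (function names are mine) shows the key behavioral difference: ReLU clips negative inputs to zero, while Tanh squashes everything into a zero-centred (-1, 1) range.

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x); passes positives through, zeroes out negatives
    return np.maximum(0.0, x)

def tanh(x):
    # Tanh: smooth, zero-centred squashing into (-1, 1)
    return np.tanh(x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))
print(tanh(x))
```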

Cited by 8 publications (8 citation statements) · References 8 publications
“…The study uses two types of activation functions, namely Rectified Linear Unit (ReLU) and Adaptive Moment Estimation (ADAM). According to a study, the model using the ReLu activation function with 4 hidden layers and 50 neurons in each hidden layer had the highest accuracy score [10]. This shows that ReLu is able to do a good job as an activation function.…”
Section: Deep Learning Neural Network
confidence: 99%
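The configuration cited above (ReLU activation, 4 hidden layers of 50 neurons each) can be sketched with scikit-learn's MLPClassifier; the synthetic dataset and remaining hyper-parameters here are placeholders, not the study's. Note that Adam, mentioned in the excerpt, is an optimizer rather than an activation function, so it appears here as the solver.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

# placeholder data standing in for the Titanic's Survivor dataset
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

model = MLPClassifier(
    hidden_layer_sizes=(50, 50, 50, 50),  # 4 hidden layers, 50 neurons each
    activation="relu",
    solver="adam",      # Adam is the optimizer, not an activation
    max_iter=500,
    random_state=0,
)
model.fit(X, y)
print(model.score(X, y))  # training accuracy
```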
“…It is commonly used in research methods involving forecasting, including those related to rainfall forecasting. The RMSSE is a scaled version of the RMSE (Root Mean Squared Error) that takes into account the seasonality and the actual variability in the data; the RMSSE can be written as in Equation (9) [16]:

$$\mathrm{RMSSE} = \sqrt{\dfrac{\frac{1}{h}\sum_{t=n+1}^{n+h}\left(y_t - \hat{y}_t\right)^2}{\frac{1}{n-1}\sum_{t=2}^{n}\left(y_t - y_{t-1}\right)^2}}$$

where $y_t$ is the actual future value of the examined time series at point $t$, $\hat{y}_t$ is the forecast by the method under evaluation, $n$ is the length of the training sample (number of historical observations), and $h$ is the forecasting horizon.…”
Section: Metric Evaluation
confidence: 99%
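The RMSSE described in the excerpt can be computed directly; this sketch follows the standard definition (numerator: mean squared forecast error over the horizon; denominator: mean squared one-step naive error on the training sample), with the symbols of the excerpt:

```python
import numpy as np

def rmsse(y_train, y_true, y_pred):
    """Root Mean Squared Scaled Error (sketch; y_train has length n,
    y_true/y_pred cover the forecasting horizon h)."""
    n = len(y_train)
    # mean squared error of the forecast over the horizon
    num = np.mean((y_true - y_pred) ** 2)
    # mean squared error of the naive one-step forecast in-sample
    den = np.sum(np.diff(y_train) ** 2) / (n - 1)
    return np.sqrt(num / den)

# toy example: training series with unit steps, two-step horizon
print(rmsse(np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
            np.array([6.0, 7.0]),
            np.array([6.0, 8.0])))
```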
“…The ReLU activation function is used on hidden layers, while the Sigmoid activation function is used on the model output. The use of this activation function requires all data to have a positive value because the ReLU function returns a value of 0 for a negative input [15]. Sigmoid is used to convert the original range of values into a range between 0 and 1 [16].…”
Section: ANN Model
confidence: 99%
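The layout described above (ReLU on the hidden layer, sigmoid on the output) can be illustrated as a single forward pass; the weights and function names here are illustrative, not from the cited model:

```python
import numpy as np

def sigmoid(z):
    # squashes any real value into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU on the hidden layer
    return sigmoid(h @ W2 + b2)       # sigmoid on the output layer

# toy dimensions: 3 inputs, 4 hidden units, 1 output
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
W1, b1 = rng.standard_normal((3, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)
print(forward(x, W1, b1, W2, b2))  # a value in (0, 1)
```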
“…One of the most well-known and proven methods useful in modeling complex hydrological processes or nonlinear systems such as sediment transport is the Artificial Neural Network (ANN) [8]. Neural Network has an effective performance in transforming data so that it can be processed more easily [9]. ANN is a data-driven method that uses experience learned from historical data to estimate desired outputs.…”
Section: Introduction
confidence: 99%
“…The Multilayer Perceptron (MLP) algorithm, characterized by an input layer, two or more hidden layers, and an output layer, exhibits resilience to data noise and is capable of solving complex data problems effectively due to its architecture with multiple hidden layers (Pardede & Hayadi, 2023). This algorithm utilizes activation functions, such as ReLu, in the hidden layer to minimize the error values of the output produced by each neuron (Firmansyah & Hayadi, 2022). When dealing with complex data, MLPs are susceptible to overfitting, where the model excessively learns details from the training data and fails to generalize well to test or new data (Dovbnych & Plechawska-WĂłjcik, 2021).…”
Section: Introduction
confidence: 99%
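The overfitting risk raised in the excerpt is commonly mitigated with a held-out validation set and early stopping; a minimal scikit-learn sketch (all hyper-parameters and data here are illustrative, not from the cited works):

```python
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

# placeholder data for a "complex" classification problem
X, y = make_classification(n_samples=300, n_features=10, random_state=1)

# early_stopping holds out a validation split and halts training when the
# validation score stops improving, limiting how much noise the MLP memorizes
model = MLPClassifier(
    hidden_layer_sizes=(50, 50),  # multiple hidden layers, ReLU activation
    activation="relu",
    early_stopping=True,
    validation_fraction=0.2,
    n_iter_no_change=10,
    max_iter=500,
    random_state=1,
)
model.fit(X, y)
print(model.score(X, y))
```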