2020
DOI: 10.3390/w12051281
Comparing Activation Functions in Modeling Shoreline Variation Using Multilayer Perceptron Neural Network

Abstract: This study modeled shoreline changes using a multilayer perceptron (MLP) neural network with data collected from five beaches in southern Taiwan. The data included aerial survey maps of the Forestry Bureau for the years 1982, 2002, and 2006, which served as predictors, while unmanned aerial vehicle (UAV) survey data from 2019 served as the response. The MLP was configured with five different activation functions in order to evaluate their significance. These functions were Identity, Tanh, Lo…


Cited by 26 publications (8 citation statements)
References 34 publications
“…Every neuron performs aggregation on its weighted inputs and yields an output through an activation function. The most commonly used activation functions are the linear, the logistic and the hyperbolic tangent activation function [24]. The output value of the j-th neuron (o j ) is given by the equations as described by Dedecker et al [25]:…”
Section: Preliminaries on ANN and Model Construction
confidence: 99%
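The neuron computation described in the excerpt above (aggregate the weighted inputs, then apply an activation function) can be sketched as follows. This is a minimal illustration, not code from the cited papers; the function and variable names are assumptions:

```python
import math

# The three activation functions the excerpt names as most common.
def identity(x):
    return x

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    return math.tanh(x)

def neuron_output(inputs, weights, bias, activation):
    """o_j: weighted aggregation of the inputs passed through an activation."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(net)

# Example: the same net input through each activation function.
inputs = [0.5, -0.2, 0.8]
weights = [0.4, 0.3, -0.6]
bias = 0.1
for f in (identity, logistic, tanh):
    print(f.__name__, neuron_output(inputs, weights, bias, f))
```

Only the choice of `activation` differs between the configurations the paper compares; the aggregation step is identical.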
“…Recently, one of the most common artificial neural networks (ANNs), the multilayer perceptron (MLP) developed by Pal and Mitra (1992) , has been broadly used for modeling and predicting complex traits, such as yield, in different breeding programs ( Geetha, 2020 ). MLP can be considered as a non-linear computational method employed for various tasks such as classification and regression of complex systems ( Chen and Wang, 2020 ; Hesami et al, 2020b ). This algorithm is able to detect the connection and relationship between the input and output (target) variables and quantify the inherent knowledge existing in the datasets ( Ghorbani et al, 2016 ; Hesami et al, 2020b ).…”
Section: Introduction
confidence: 99%
“…In the MLP algorithm, backpropagation is used to reduce the error between the output layer and the target class [4], and the algorithm also applies an activation function in the hidden layer to reduce the error of the output produced by each neuron [5]. The ReLU [6] and Tanh [7] activation functions are two of the most popular activation functions and are widely used in MLP architectures, because they can yield faster training times. This study aims to analyze the classification results of the MLP algorithm using these two activation functions with varying numbers of hidden layers.…”
Section: Pendahuluan (Introduction)
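A minimal sketch of why ReLU often trains faster than Tanh, as the excerpt above notes: tanh's gradient saturates toward zero for large inputs, while ReLU's gradient stays at 1 for any positive input, so error signals propagated backward are not shrunk. The helper names here are illustrative assumptions:

```python
import math

def relu(x):
    return max(0.0, x)

def relu_grad(x):
    # Constant gradient of 1 for positive inputs, 0 otherwise.
    return 1.0 if x > 0 else 0.0

def tanh_grad(x):
    # d/dx tanh(x) = 1 - tanh(x)^2, which vanishes for large |x|.
    return 1.0 - math.tanh(x) ** 2

# Compare gradients as the input grows: tanh saturates, ReLU does not.
for x in (0.5, 3.0, 6.0):
    print(x, tanh_grad(x), relu_grad(x))
```

The shrinking tanh gradients slow weight updates during backpropagation in deeper networks, which is one common explanation for the faster training times the excerpt attributes to ReLU.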