1995
DOI: 10.4287/jsprs.34.4_22

Optimized Neural Network for Spatial Land Cover Classification with the Aid of Co-occurrence Matrix.

Cited by 5 publications (5 citation statements) | References 6 publications

“…This can be seen from the shorter processing time and lower error; using test-site data, a high classification accuracy is obtained, namely an Overall Accuracy of 89% and a Kappa Coefficient of 0.88 for the one-hidden-layer architectural model. This is in line with the results of previous studies (Kushardono, Fukue, Shimoda, & Sakata, 1995; Li, Zhang, & Huang, 2022; Silva, Xavier, da Silva, & Santos, 2020) that fast convergence in backpropagation training requires the number of neurons and layers in the hidden layer to be balanced against the number of neurons in the output layer; in this study a single-hidden-layer architecture with 16 hidden neurons proved best for 10 output neurons, accommodating 10 class categories, with 4 neurons in the input layer.…”
Section: Optimization of the Neural Network Layer (supporting)
confidence: 88%
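
The quoted study pins down a concrete topology: 4 input neurons, one hidden layer of 16 neurons, and 10 output neurons, one per class. Below is a minimal sketch of that 4-16-10 feed-forward pass, assuming sigmoid activations and random initialization; only the layer sizes come from the quote, everything else is illustrative.

```python
# Hypothetical sketch of the 4-16-10 network described above; layer
# sizes are from the quoted study, all other choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_input, n_hidden, n_output = 4, 16, 10   # 4 features, 16 hidden, 10 classes

W1 = rng.normal(0.0, 0.1, (n_input, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, n_output))  # hidden -> output weights
b2 = np.zeros(n_output)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """One feature vector -> per-class scores via the single hidden layer."""
    h = sigmoid(x @ W1 + b1)   # hidden activations
    y = sigmoid(h @ W2 + b2)   # one output neuron per land-cover class
    return h, y

_, scores = forward(rng.random(n_input))
print(int(scores.argmax()))    # index of the winning class
```
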
“…The backpropagation-based training process is carried out until the RMS error falls below 0.01 or a maximum of 10,000 iterations is reached. Based on previous experience, including the research of Kushardono, Fukue, Shimoda, and Sakata (1995), neural network learning with backpropagation is successful when the RMS error between the output-layer output and the learning teacher is less than 0.1; this research uses 0.01 to further ensure that learning converges fully. The limit of 10,000 iterations is likewise based on research experience: if more learning iterations than that are needed, training is a failure, i.e., the RMS error is still high and cannot converge. In this process the weight and bias factors of each neuron are corrected at every iteration based on the RMS error, which is calculated from the difference between the outputs of the neurons in the output layer and the desired output for the training-data class.…”
Section: Methods (mentioning)
confidence: 99%
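
The stopping rule above (converge when RMS error drops below 0.01, give up after 10,000 iterations) translates directly into a training loop. A hedged sketch follows; the batch gradient update, learning rate, and data layout are assumptions for illustration, not the original implementation.

```python
# Sketch of backpropagation with the quoted stopping rule: success when
# RMS error < 0.01, failure if 10,000 iterations pass without converging.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, T, W1, b1, W2, b2, lr=0.1, rms_target=0.01, max_iter=10_000):
    """X: (n, inputs) features; T: (n, outputs) teacher signals (one-hot)."""
    for it in range(max_iter):
        h = sigmoid(X @ W1 + b1)            # forward pass, hidden layer
        y = sigmoid(h @ W2 + b2)            # forward pass, output layer
        err = y - T                         # output vs. learning teacher
        rms = np.sqrt(np.mean(err ** 2))
        if rms < rms_target:                # converged: RMS below 0.01
            return it, rms
        dy = err * y * (1.0 - y)            # sigmoid deltas, output layer
        dh = (dy @ W2.T) * h * (1.0 - h)    # backpropagated hidden deltas
        W2 -= lr * (h.T @ dy); b2 -= lr * dy.sum(axis=0)
        W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)
    return max_iter, rms                    # cap reached: treated as failure
```
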
“…Our ANN classification parameters were chosen based on previous studies conducted by G. M. Foody & Arora (1997). According to Kushardono et al. (1995), optimum ANN results can be achieved when the number of hidden layers is kept to a minimum, with the hidden layer holding twice as many neurons as there are classes under investigation. The training rate and training momentum should sit at opposite ends of their range.…”
Section: Bare Soil (mentioning)
confidence: 99%
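
The sizing heuristic attributed to Kushardono et al. (1995) in the quote above (one hidden layer, twice as many hidden neurons as classes) is easy to make explicit. A tiny illustrative helper, with the function name my own:

```python
# Hypothetical helper for the quoted heuristic: a single hidden layer
# sized at twice the number of target classes.
def hidden_layer_size(n_classes: int) -> int:
    """One hidden layer with 2 x n_classes neurons."""
    return 2 * n_classes

print(hidden_layer_size(10))   # 10 land-cover classes -> 20 hidden neurons
```
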
“…G. M. Foody & Arora (1997) further agreed with earlier findings, stating that increasing the size and complexity of the network incurs extra computational and ground-data cost with only a slight increase in classification accuracy. The more hidden layers are used, the more interactions there are between neurons and between layers, and the more complicated the process of finding the offset and gain value of each neuron becomes (Kushardono et al., 1995). We therefore decided to use only a single hidden layer to optimize computational time and classification accuracy.…”
Section: Bare Soil (mentioning)
confidence: 99%
“…The high classification accuracy obtained with the 242-channel model is partly due to the larger number of input-layer neurons connected to the hidden-layer neurons; in the research of Kushardono et al. (1995a), the more neurons in this layer, the easier it is for the neural network classifier to distinguish classes, especially in classifications with many land-cover classes. In addition, the 163-channel model does not use the information at wavelengths from 1235 nm to 1356 nm and from 1721 nm to 1900 nm; as Figure 3-1 shows, those wavelength regions still contain some land-class information, chiefly for the crater and magma classes, and this information was likely the main discriminator used by the related neurons in this non-parametric classification. Its absence caused the accuracy for these land classes to drop to 88% and 44%, compared with the 242-channel model, whose accuracy reaches 100% and 46%.…”
Section: Methods (unclassified)