2003
DOI: 10.1016/s1537-5110(03)00083-7

Comparison of a Neural Network and a Non-parametric Classifier for Grain Kernel Identification

Cited by 49 publications (15 citation statements) · References 12 publications · citation statements published 2005–2020.
“…Model 6 was found by inputting 69 variables into a backpropagation neural network for training, with 40 nodes in the first hidden layer and 20 nodes in the second hidden layer, 50,000 iterations, an RMS error of 0.19, and a correlation coefficient of 0.67. In the manner of Paliwal et al. (2003), the two-dimensional correlation coefficient matrix of the variables (an m × n matrix, where m and n are the sample size and the number of variables, respectively) was found. By taking the absolute value of each element and averaging each column, the two-dimensional correlation coefficient matrices were reduced to a one-dimensional correlation coefficient vector, and the variables were rearranged by correlation coefficient from low to high.…”
Section: Methods (mentioning)
confidence: 99%
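The variable-ranking step quoted above can be illustrated with a short sketch. This is a minimal, hypothetical illustration only: it assumes the features are held in an m × n NumPy array (rows = samples, columns = variables), and the function name and data are not taken from either paper.

```python
import numpy as np

def rank_features_by_mean_abs_correlation(X):
    """Rank variables by their average absolute correlation with all other
    variables, from low to high (least redundant variables first)."""
    # n x n correlation coefficient matrix between the columns of X
    corr = np.corrcoef(X, rowvar=False)
    # Average the absolute correlations down each column -> one score per variable
    mean_abs_corr = np.abs(corr).mean(axis=0)
    # Indices of variables ordered from low to high average correlation
    order = np.argsort(mean_abs_corr)
    return order, mean_abs_corr

# Example usage with random data standing in for the 69 image features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 69))          # m = 200 samples, n = 69 variables
order, scores = rank_features_by_mean_abs_correlation(X)
print(order[:10])                        # the ten least-correlated variables
```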
“…Network analyses (artificial neural network [ANN] and radial basis function [RBF]) have been utilized. Several authors have observed that ANNs can provide higher classification and prediction probability than conventional multivariate analysis for complex nonlinear data (Gardner and Bartlett 1993; Ping and Jun 1996; Brezmes and others 1997; Schaller and others 1998; Paliwal and others 2003; Ushada and Murase 2006). …”
Section: Introduction (mentioning)
confidence: 99%
“…They developed models with the use of neural networks, which resulted in a classification accuracy of 40–96%. Paliwal et al. [9] used a combination of geometric features, grain surface texture and its color to develop a statistical model that used selected variables to identify varieties. They selected 20 features in each group.…”
Section: Introduction (mentioning)
confidence: 99%
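The per-group feature selection mentioned in the statement above (a fixed number of geometric, textural and colour features) could look roughly like the following. This is a hedged sketch under stated assumptions: the group names, sizes, scoring rule and data are illustrative stand-ins, not the procedure of Paliwal et al. or the citing paper.

```python
import numpy as np

def select_top_k_per_group(groups, y, k=20):
    """For each named feature group (an m x p array), keep the k columns whose
    absolute correlation with the class label y is highest, then concatenate
    the kept columns into a single feature matrix."""
    kept = []
    for name, X in groups.items():
        # Score each column by |correlation with the label| (crude, illustrative score)
        scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
        top = np.argsort(scores)[::-1][:k]   # indices of the k highest-scoring columns
        kept.append(X[:, top])
    return np.hstack(kept)

# Example with random stand-ins for the three feature groups
rng = np.random.default_rng(1)
m = 300
y = rng.integers(0, 5, size=m).astype(float)   # five grain classes, encoded as numbers
groups = {
    "geometric": rng.normal(size=(m, 30)),
    "texture":   rng.normal(size=(m, 25)),
    "colour":    rng.normal(size=(m, 40)),
}
X_selected = select_top_k_per_group(groups, y, k=20)
print(X_selected.shape)   # (300, 60): 20 features kept from each of the three groups
```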