2013
DOI: 10.1080/02664763.2012.749050
Data mining with Rattle and R

Cited by 5 publications (5 citation statements)
References 0 publications
“…. , J is the scale number (Mwitondi 2013; Percival and Walden 2006). The representation of MODWT is shown in Fig.…”
Section: Maximum Overlapping Discrete Wavelet Transform (MODWT)
confidence: 99%
“…The features from the training patterns have been used to construct the DT [20]. The algorithm for the design of the DT is given below [14], [21].…”
Section: Decision Tree Classifier (DT)
confidence: 99%
“…A data-mining-based classifier, the decision tree (DT), is suitable for discriminating PQDS. The decision tree (DT) [14], [15] has been chosen here to discriminate the PQDS.…”
confidence: 99%
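The two excerpts above describe training a decision tree classifier on features extracted from training patterns. A minimal sketch of that idea, using scikit-learn's `DecisionTreeClassifier` in place of the cited papers' implementation (the feature matrix here is synthetic; in those papers the features come from power-quality disturbance signals, and `max_depth=5` is an illustrative choice, not a value from the cited work):

```python
# Sketch: fit a decision tree on extracted features, then score it on held-out
# data. Synthetic features stand in for the cited papers' PQDS features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for features extracted from training patterns
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# max_depth is an assumed hyperparameter, chosen only for illustration
dt = DecisionTreeClassifier(max_depth=5, random_state=0)
dt.fit(X_train, y_train)

accuracy = dt.score(X_test, y_test)  # fraction of test patterns classified correctly
```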
“…The optimum number of variables randomly sampled at each split was estimated by the decrease in the out-of-bag (OOB) error, using the 'tuneRF' function implemented in the 'randomForest' package (Liaw and Wiener 2002). We assessed model performance based on several estimators that evaluated the efficiency, rate, and type of errors in the discriminatory capacity of binary models, including the OOB error, the area under the receiver-operating characteristic curve (AUC), and the rates of false positives (FP) and false negatives (FN) (Lalkhen and McCluskey 2008; Mwitondi 2013). The OOB is the misclassification rate of the random forest model estimated for the training data, with higher values indicating models having lower classification accuracy (Mwitondi 2013).…”
Section: Data Analyses
confidence: 99%
“…We assessed model performance based on several estimators that evaluated the efficiency, rate, and type of errors in the discriminatory capacity of binary models, including the OOB error, the area under the receiver-operating characteristic curve (AUC), and the rates of false positives (FP) and false negatives (FN) (Lalkhen and McCluskey 2008; Mwitondi 2013). The OOB is the misclassification rate of the random forest model estimated for the training data, with higher values indicating models having lower classification accuracy (Mwitondi 2013). The AUC facilitated evaluation of the predictive efficiency of the model; its values ranged from 0.5 for models having predictive capacity similar to chance, to 1.0 for models having perfect predictive ability (Araújo et al. 2005).…”
Section: Data Analyses
confidence: 99%
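The excerpts above compute four assessment metrics for a random forest: OOB error, AUC, and the false-positive and false-negative rates. A sketch of the same metrics using scikit-learn's `RandomForestClassifier` as a stand-in for the cited R workflow (`randomForest`/`tuneRF`); the data is synthetic, and `max_features=3` merely plays the role of R's `mtry` here rather than being a tuned value:

```python
# Sketch: fit a random forest with OOB scoring enabled, then compute the
# metrics named above (OOB error, AUC, FP rate, FN rate) on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=1)

# max_features is the analogue of R's mtry; the value here is illustrative,
# not the result of tuneRF-style tuning
rf = RandomForestClassifier(n_estimators=200, max_features=3,
                            oob_score=True, random_state=1)
rf.fit(X_train, y_train)

# OOB error: misclassification rate on samples left out of each bootstrap
oob_error = 1.0 - rf.oob_score_

# AUC: 0.5 ~ chance-level discrimination, 1.0 ~ perfect discrimination
auc = roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1])

# FP and FN rates from the binary confusion matrix
tn, fp, fn, tp = confusion_matrix(y_test, rf.predict(X_test)).ravel()
fp_rate = fp / (fp + tn)
fn_rate = fn / (fn + tp)
```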