Intelligent Information Processing and Web Mining 2003
DOI: 10.1007/978-3-540-36562-4_23
A Hybrid Genetic Algorithm — Decision Tree Classifier

Cited by 9 publications (3 citation statements)
References 3 publications
“…The optimal performance based on F1-scores, Precision, Recall and accuracy are reported for (CNN and DNN) compared to the other traditional techniques. The DNN-based model; on the three datasets; recorded highest results of average of 88% while Decision Tree 38 recorded the lowest results of 67%. This nominates the proposed DNN model for similar mining tasks and supports its efficiency and performance.…”
Section: Experimental and Discussion
Confidence: 93%
“…Results in this section are reported using 10-fold cross-validation, which are carried out by partitioning the owner dataset (Genera_Data) into 10 folds where 9 are used for training and one for testing, then this procedure is repeated 10 times averaging results. In the following a comparative analysis of two implemented deep learning models; (1) Basic CNN holds identical configuration in 34 and (2) our proposed DNN architecture; with traditional techniques (Decision tree 35 , 36 , SVM 17 , Logistic Regression 37 and Naïve Bayes 38 ) is presented. Also each approach is analyzed from variant permutation of features, we name as models (M(1) to M(2)) to check optimal findings.…”
Section: Experimental and Discussion
Confidence: 99%
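The quoted passage above describes a standard 10-fold cross-validation protocol: partition the dataset into 10 folds, train on 9 and test on 1, repeat 10 times, and average the results. A minimal sketch of that scheme, assuming placeholder indices and a placeholder scoring function rather than the citing paper's actual Genera_Data dataset or models:

```python
# Sketch of 10-fold cross-validation as described in the quoted passage:
# split n samples into 10 roughly equal folds, hold out one fold per
# round for testing, train on the remaining nine, and average the scores.
# `score_fn` is a hypothetical stand-in for fitting and evaluating a model.

def k_fold_indices(n_samples, k=10):
    """Yield (train_idx, test_idx) pairs covering k roughly equal folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        test_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, test_idx
        start += size

def cross_validate(score_fn, n_samples, k=10):
    """Average score_fn(train_idx, test_idx) over all k folds."""
    scores = [score_fn(tr, te) for tr, te in k_fold_indices(n_samples, k)]
    return sum(scores) / len(scores)
```

In practice one would shuffle (often stratify) the indices before folding; the sketch keeps them in order only for clarity.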
“…In the semi-supervised hashing (SSH) category, several methods are proposed that not only utilize images with a few labels, but also utilize images with a set of labels. These methods generate hash codes and minimize the empirical errors between pairwise data in order to avoid over-fitting [22]. We can refer to the bootstrap sequential projection learning (BSPLH) technique as an example of SSH methods [23].…”
Section: Related Work
Confidence: 99%