Proceedings of the 3rd International Conference on Electronics, Communications and Control Engineering 2020
DOI: 10.1145/3396730.3396736
How Similar is Similar

Cited by 1 publication (1 citation statement)
References 2 publications
“…Within this investigation, we assess the influence on the features and performance of CA-CMA by comparing various classifier models, aiming to determine the most suitable classification model. Specifically, we made no alterations to the feature extraction methods while substituting the DNN and CNN fusion model with five distinct classifiers, including KNN (K-nearest neighbor) [35–37], LR (rotation forest) [40–42] — listed alongside LR (logistic regression) [38, 39], SVM (support vector machine) [39, 43], AdaBoost algorithm [44] and GBDT (gradient boosting decision tree) [45–47] — for research. Table 5 showcases the average outcomes obtained from the 5-fold CV experiments conducted by the aforementioned models on the identical dataset, and this information is also depicted in Figure 7.…”
Section: Results
Citation type: mentioning
Confidence: 99%
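
The comparison protocol described in the citation statement above — feature extraction held fixed, the DNN/CNN fusion classifier swapped out for standard classifiers, and results averaged over 5-fold cross-validation on the same dataset — can be sketched roughly as follows. This is a minimal illustration using scikit-learn, not the cited authors' code: the synthetic data, the default hyperparameters, and the use of RandomForestClassifier as a stand-in for rotation forest (which scikit-learn does not provide) are all assumptions made for the sketch.

# Minimal sketch of the classifier-comparison protocol quoted above:
# keep the feature pipeline fixed, swap in different classifiers, and
# report average 5-fold cross-validation scores for each one.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier)
from sklearn.svm import SVC

# Placeholder data; the cited study would use its own extracted features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

classifiers = {
    "KNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),  # stand-in for rotation forest
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "GBDT": GradientBoostingClassifier(random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold CV, accuracy by default
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")

Averaging the five fold scores per classifier mirrors how the citing paper reports per-model averages in its Table 5 and Figure 7.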