2019
DOI: 10.1109/access.2019.2918026
Predicting Intentions of Students for Master Programs Using a Chaos-Induced Sine Cosine-Based Fuzzy K-Nearest Neighbor Classifier

Cited by 96 publications (41 citation statements) | References 52 publications
“…Finally, the optimal FKNN classifier is taken to determine whether the specific patient is severe or nonsevere. The commonly used 10-fold cross-validation (CV) scheme is used to divide the data and obtain more accurate and unbiased experimental results, as adopted in many studies [67]–[74].…”
Section: HHO-FKNN Method
confidence: 99%
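The 10-fold CV scheme cited in this excerpt splits the data into ten stratified folds and rotates the held-out fold. A minimal sketch is shown below, assuming scikit-learn and using KNeighborsClassifier as a stand-in for FKNN (fuzzy KNN is not part of scikit-learn); the dataset is synthetic placeholder data, not the study's patient data.

```python
# Minimal sketch: 10-fold cross-validation around a KNN-style classifier.
# KNeighborsClassifier stands in for the fuzzy KNN (FKNN) used in the
# cited study; the data below is a synthetic placeholder.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5)          # stand-in for FKNN
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```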
“…Since its inception, SCA has been extensively utilized in various applications such as wind speed forecasting [76], time series prediction [77], prediction of students' intention to take the postgraduate entrance examination [78], and prediction of students' entrepreneurial intention [79]. Moreover, many improved SCA variants have been proposed to enhance its performance.…”
Section: OLSCA
confidence: 99%
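The SCA referenced here is the sine cosine algorithm, whose core update moves each candidate toward or around the best solution found so far using sine and cosine terms with a linearly decreasing amplitude. The sketch below implements that basic update on a sphere test function; the population size, bounds, and the constant a = 2 are assumptions, and the chaos-enhanced variant (CESCA) from the indexed paper would additionally drive the random coefficients with a chaotic map.

```python
# Minimal sketch of the basic sine cosine algorithm (SCA) position update,
# minimizing a simple sphere function. Population size, bounds, and a=2
# are assumed defaults, not values from the cited experiments.
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def sca(obj, dim=10, pop=30, iters=200, lb=-10.0, ub=10.0, a=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(pop, dim))
    fitness = np.apply_along_axis(obj, 1, X)
    best = X[np.argmin(fitness)].copy()

    for t in range(iters):
        r1 = a - t * (a / iters)                      # linearly decreasing amplitude
        r2 = rng.uniform(0, 2 * np.pi, size=(pop, dim))
        r3 = rng.uniform(0, 2, size=(pop, dim))
        r4 = rng.uniform(size=(pop, dim))
        step = np.abs(r3 * best - X)
        X = np.where(r4 < 0.5,
                     X + r1 * np.sin(r2) * step,      # sine branch
                     X + r1 * np.cos(r2) * step)      # cosine branch
        X = np.clip(X, lb, ub)
        fitness = np.apply_along_axis(obj, 1, X)
        if fitness.min() < obj(best):
            best = X[np.argmin(fitness)].copy()
    return best, obj(best)

best_x, best_f = sca(sphere)
print(f"best sphere value found: {best_f:.6f}")
```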
“…To study the performance of the proposed strategy, several competitive MAs, including MSCA [83], OBSCA, chaos-enhanced SCA (CESCA) [78], the bat algorithm (BA) [92], MFO, the firefly algorithm (FA) [93], PSO, and SCA, are compared on 23 well-known benchmark functions. The full parameter values of the involved algorithms are reported in Table 2.…”
Section: A. Benchmark Function Validation
confidence: 99%
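Benchmark comparisons of this kind typically run each algorithm many independent times on every test function with a fixed evaluation budget and report mean and standard deviation. The sketch below shows only that harness structure, with two classic test functions and plain random search as a placeholder optimizer; the cited study itself compares MSCA, OBSCA, CESCA, BA, MFO, FA, PSO, and SCA on 23 functions.

```python
# Illustrative harness for comparing optimizers on benchmark functions.
# Two classic functions and a plain random search stand in for the
# algorithms and the 23-function suite used in the cited comparison.
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def rastrigin(x):
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def random_search(obj, dim, budget, lb=-5.12, ub=5.12, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.uniform(lb, ub, size=(budget, dim))
    return min(obj(s) for s in samples)

benchmarks = {"sphere": sphere, "rastrigin": rastrigin}
for name, fn in benchmarks.items():
    # 30 independent runs with a fixed budget, reporting mean and std,
    # as is conventional in metaheuristic benchmark studies.
    results = [random_search(fn, dim=10, budget=5000, seed=run) for run in range(30)]
    print(f"{name:10s} mean={np.mean(results):.3f} std={np.std(results):.3f}")
```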
“…We also used machine learning models to compare their results against the proposed DS-MLP. Six machine learning models, namely logistic regression (LR) [36], [37], random forest (RF) [38], [39], decision tree (DT) [40], support vector machine (SVM) [41], K-nearest neighbour (KNN) [42], and Gaussian Naive Bayes (GNB) [43], are used for this purpose. We use RF with 300 n_estimators, meaning that RF constructs 300 decision trees, each tree predicts a label for every example, and the final prediction is obtained by majority voting over the 300 predictions.…”
Section: B. Comparison With Other Machine Learning Models
confidence: 99%
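The baseline comparison described here can be outlined with scikit-learn, including a random forest of 300 trees whose final prediction is a majority vote over the trees. The sketch below uses synthetic placeholder data and default settings for the other classifiers, so it illustrates only the setup, not the reported results.

```python
# Sketch of the baseline comparison from the excerpt: six scikit-learn
# classifiers, including a random forest with n_estimators=300
# (majority voting over 300 decision trees). Placeholder data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "GNB": GaussianNB(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:4s} accuracy: {scores.mean():.3f}")
```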