2016
DOI: 10.7287/peerj.preprints.2187v1
Preprint

Enhancing genetic algorithms using multi mutations

Abstract: Mutation is one of the most important stages of the genetic algorithm because of its impact on the exploration of global optima, and to overcome premature convergence. There are many types of mutation, and the problem lies in selection of the appropriate type, where the decision becomes more difficult and needs more trial and error. This paper investigates the use of more than one mutation operator to enhance the performance of genetic algorithms. Novel mutation operators are proposed, in addition to two selec…
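As a rough illustration of the multi-mutation idea described in the abstract, the sketch below applies one of several mutation operators, chosen uniformly at random, to each offspring. The operator set (bit-flip and gene-swap) and the uniform-random selection strategy are illustrative assumptions, not the paper's novel operators or its selection strategies.

```python
import random

def bit_flip(chrom, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in chrom]

def bit_swap(chrom):
    """Exchange the genes at two randomly chosen positions."""
    c = chrom[:]
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

def multi_mutate(chrom, operators):
    """One possible multi-mutation scheme: for each offspring,
    pick one of the available operators at random and apply it."""
    return random.choice(operators)(chrom)

child = multi_mutate([0, 1, 1, 0, 1], [bit_flip, bit_swap])
```

Selecting the operator per offspring keeps the cost of one mutation per individual while still exercising every operator over the run; the paper's own selection strategies (truncated in the abstract above) may be more sophisticated.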

Cited by 10 publications (6 citation statements)
References 1 publication
“…The average precision value improvement by our approach over all baseline approaches is 0.051%, 0.049%, 0.019%, 0.04%, 0.034%, and 0.016% compared with LDA [2], [20][21][22][23], SVM [2,24], KNN [2,[25][26][27], LR [2,28], NB [2,29], and DT [2,30], respectively. In terms of recall, our approach obtains 0.988%.…”
Section: Results
confidence: 99%
“…In terms of accuracy, our approach obtains 0.988%. The average accuracy value improvement by our approach over all baseline approaches is 0.081%, 0.069%, 0.031%, 0.072%, 0.08%, and 0.026% compared with LDA [2,31], SVM [2,24], KNN [2,[25][26][27], LR [2,28], NB [2,29], and DT [2,30], respectively. All of these results indicate that the unified learning approach is more effective compared with the baseline approaches.…”
Section: Results
confidence: 99%
“…To avoid the influence of sample imbalance on KNN classification, our dataset has an equal number of positive and negative samples. When testing the dataset, considering the square root rule [27,28], the value of parameter K is set to 35.…”
Section: KNN
confidence: 99%
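The square-root rule cited in the statement above is a common heuristic: set K to roughly the square root of the number of training samples, typically rounded to an odd value to avoid ties in binary classification. The helper below is a minimal sketch of that heuristic; the function name and the odd-K adjustment are illustrative assumptions, not from the cited work.

```python
import math

def k_by_square_root_rule(n_train, force_odd=True):
    """Heuristic choice of K for KNN: K ~ sqrt(n_train).
    An odd K avoids voting ties with two balanced classes."""
    k = max(1, round(math.sqrt(n_train)))
    if force_odd and k % 2 == 0:
        k += 1
    return k

# A balanced training set of about 1225 samples yields K = 35,
# consistent with the value quoted in the statement above.
k = k_by_square_root_rule(1225)
```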
“…It was observed that as the number of nodes increased, the processing speed increased and more unsuccessful results were obtained. Hassanat et al. (2016) compared several mutation operators and observed that using more than one mutation increased the success of the algorithm. Therefore, in this study, the algorithm has been developed using the flip, swap and slide mutation operators together.…”
Section: Travelling Salesman Problem and Genetic Algorithm
confidence: 99%
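The three operators named in the statement above can be sketched for permutation-encoded tours as follows. These are the common textbook definitions of flip (segment reversal), swap (exchange two cities) and slide (move one city to a new position, shifting the rest), chosen uniformly at random per mutation; the exact variants and selection scheme used in the citing study may differ.

```python
import random

def flip(tour, i, j):
    """Reverse the sub-tour between positions i and j (inclusive)."""
    t = tour[:]
    t[i:j + 1] = reversed(t[i:j + 1])
    return t

def swap(tour, i, j):
    """Exchange the cities at positions i and j."""
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def slide(tour, i, j):
    """Move the city at position i to position j, sliding the
    cities in between one place toward position i."""
    t = tour[:]
    t.insert(j, t.pop(i))
    return t

def mutate(tour):
    """Apply one of the three operators, chosen at random."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return random.choice([flip, swap, slide])(tour, i, j)
```

All three operators preserve the permutation property, so every mutated tour remains a valid TSP solution visiting each city exactly once.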