| Source | Classifier | Dataset size (samples) | Accuracy |
|---|---|---|---|
| … | … | 24,520 / 138,925 | 99.78% |
| (Zhao and Hoi, 2013) | Classic Perceptron | 990,000 / 10,000 | 99.49% |
| (Patil and Patil, 2018) | Random Forest | 26,041 / 26,041 | 99.44% |
| (Zhao and Hoi, 2013) | Label Efficient Perceptron | 990,000 / 10,000 | 99.41% |
| (Chen et al, 2014) | Logistic Regression | 1,945 / 404 | 99.40% |
| (Cui et al, 2018) | SVM | 24,520 / 138,925 | 99.39% |
| (Patil and Patil, 2018) | Fast Decision Tree Learner (REPTree) | 26,041 / 26,041 | 99.19% |
| (Zhao and Hoi, 2013) | Cost-sensitive Perceptron | 990,000 / 10,000 | 99.18% |
| (Patil and Patil, 2018) | CART⁵ | 26,041 / 26,041 | 99.15% |
| (Jain and Gupta, 2018b) | Random Forest | 2,141 / 1,918 | 99.09% |
| (Patil and Patil, 2018) | J48⁶ | 26,041 / 26,041 | 99.03% |
| (Verma and Dyer, 2015) | J48 | 11,271 / 13,274 | 99.01% |
| (Verma and Dyer, 2015) | PART⁷ | 11,271 / 13,274 | 98.98% |
| (Verma and Dyer, 2015) | Random Forest | 11,271 / 13,274 | 98.88% |
| (Shirazi et al, 2018) | Gradient Boosting | 1,000 / 1,000 | 98.78% |
| (Cui et al, 2018) | Naïve Bayes | 24,520 / 138,925 | 98.72% |
| (Cui et al, 2018) | C4.5 | 356,215 / 2,953,700 | 98.70% |
| (Patil and Patil, 2018) | Alternating Decision Tree | 26,041 / 26,041 | 98.48% |
| (Shirazi et al, 2018) | SVM (Linear) | 1,000 / 1,000 | 98.46% |
| (Shirazi et al, 2018) | CART | 1,000 / 1,000 | 98.42% |
| (Adebowale et al, 2019) | Adaptive Neuro-Fuzzy Inference System | 6,843 / 6,157 | 98.30% |
| (Vanhoenshoven et al, 2016) | Random Forest | 1,541,000 / 759,000 | 98.26% |
| (Jain and Gupta, 2018b) | Logistic Regression | 2,141 / 1,918 | 98.25% |
| (Patil and Patil, 2018) | Random Tree | 26,041 / 26,041 | 98.18% |
| (Shirazi et al, 2018) | k-Nearest Neighbours | 1,000 / 1,000 | 98.05% |
| (Vanhoenshoven et al, 2016) | Multi-Layer Perceptron | 1,541,000 / 759,000 | 97.97% |
| (Verma and Dyer, 2015) | Logistic Regression | 11,271 / 13,274 | 97.70% |
| (Jain and Gupta, 2018b) | Naïve Bayes | 2,141 / 1,918 | 97.59% |
| (Vanhoenshoven et al, 2016) | k-Nearest Neighbours | 1,541,000 / 759,000 | 97.54% |
| (Shirazi et al, 2018) | SVM (Gaussian) | 1,000 / 1,000 | 97.42% |
| (Vanhoenshoven et al, 2016) | C5.0⁸ | 1,541,000 / 759,000 | 97.40% |
| … | … | … | … |
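The classifiers in this table are largely off-the-shelf supervised learners. As a minimal illustrative sketch, not the pipeline of any cited study, this kind of comparison can be reproduced in spirit with scikit-learn; the synthetic make_classification data below is a placeholder for the URL and page features the surveyed papers extract, and the 50/50 train/test split is an assumption.

```python
# Illustrative only: benchmark the classifier families listed in the table
# above on one shared feature matrix. Dataset, features, and split are
# placeholders, not those of the cited studies.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder stand-in for extracted URL/page features (phishing vs legitimate).
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

classifiers = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM (Linear)": SVC(kernel="linear"),
    "SVM (Gaussian)": SVC(kernel="rbf"),
    "Naïve Bayes": GaussianNB(),
    "k-Nearest Neighbours": KNeighborsClassifier(),
    "Decision Tree (CART)": DecisionTreeClassifier(random_state=0),
}

# Fit each classifier on the same split and report held-out accuracy.
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name:24s} {acc:.2%}")
```

Note that the accuracies in the table are not directly comparable across rows: each study uses its own features, dataset, and evaluation protocol, whereas a shared harness like the sketch above evaluates every classifier under identical conditions.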