2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)
DOI: 10.1109/icccnt.2013.6726477
Comparison of data mining classification algorithms for breast cancer prediction

Cited by 73 publications (28 citation statements)
References 3 publications
“…Three different data mining classification algorithms, namely decision tree, Naïve Bayes, and K-Nearest Neighbor [8], have been compared for cancer prediction with the help of WEKA (Waikato Environment for Knowledge Analysis), an open-source software suite. It was concluded that Naïve Bayes is superior to the other two algorithms.…”
Section: Related Work (mentioning)
confidence: 99%
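As an illustration of the comparison this statement describes: the study itself used WEKA, but an analogous experiment can be sketched in Python with scikit-learn. The library choice, the bundled Wisconsin breast cancer dataset, and the hyperparameters below are assumptions for demonstration, not the study's actual setup.

# Illustrative sketch only: the cited study used WEKA, not scikit-learn.
# Dataset and hyperparameters are assumptions for demonstration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

classifiers = {
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "K-Nearest Neighbor": KNeighborsClassifier(n_neighbors=5),
}

# 10-fold cross-validated accuracy for each classifier.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")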
“…In K-NN, the unknown tuple is assigned to the most common class among its K nearest neighbors. When K = 1, the unknown tuple is assigned the class of the training tuple that is closest to it in the pattern space [28].…”
Section: K-Nearest Neighbor (mentioning)
confidence: 99%
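A minimal from-scratch sketch of the majority-vote rule described above (an illustration under assumed names and a Euclidean distance, not code from the cited papers):

# From-scratch sketch of the K-NN majority-vote rule; the function name
# and the Euclidean distance choice are illustrative assumptions.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    # Distance from the query tuple to every training tuple in pattern space.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k nearest neighbors.
    nearest = np.argsort(dists)[:k]
    # Assign the most common class among the k nearest neighbors.
    return Counter(y_train[nearest]).most_common(1)[0][0]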
“…When K = 1, the unknown tuple is assigned the class of the single training tuple that is closest in the pattern space. This method is also called the lazy-learner method, as it simply stores the training data and waits until it is given test data [2].…”
Section: K-Nearest Neighbour (mentioning)
confidence: 99%
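The lazy-learning behavior, with fit essentially storing the training data and the closest-tuple search deferred to prediction time, can be seen at K = 1. This sketch uses scikit-learn and toy data as assumed stand-ins; the cited papers do not use this library.

# Illustrative K = 1 (nearest-neighbor) sketch; library choice and toy
# data are assumptions, not taken from the cited papers.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y_train = np.array([0, 0, 1])

# "Lazy learner": fit() essentially stores the training data; finding the
# closest training tuple happens only at prediction time.
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(clf.predict([[4.5, 5.2]]))  # -> [1], class of the closest training tuple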
“…Random forest is often used when we have large training data sets and a large number of input variables. In the end, this method builds many decision trees [2].…”
Section: Literature Review, 2.1 Decision Tree (mentioning)
confidence: 99%
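A minimal sketch of "building many decision trees" with a random forest, again using scikit-learn and its bundled dataset as assumed stand-ins for the tooling in the cited papers:

# Illustrative random-forest sketch; library, dataset, and n_estimators
# are assumptions for demonstration, not the cited papers' setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The forest aggregates many decision trees, each trained on a bootstrap
# sample with a random subset of input variables considered at each split.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_tr, y_tr)
print(f"Test accuracy: {forest.score(X_te, y_te):.3f}")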