Encyclopedia of Machine Learning 2011
DOI: 10.1007/978-0-387-30164-8_154
Concept Learning

Cited by 31 publications (35 citation statements)
References 6 publications
“…FN is the event that the model predicts that the sample does not belong to a label when the sample in fact has the label. The positive predictive value (PPV) is therefore defined as (Sammut & Webb, 2011): $\mathrm{PPV} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}} = \frac{\mathrm{TP}}{\text{all positive responses}}$. Moreover, the complement of the PPV, the false discovery rate (FDR), is defined as $\mathrm{FDR} = 1 - \mathrm{PPV} = \frac{\mathrm{FP}}{\mathrm{TP} + \mathrm{FP}} = \frac{\mathrm{FP}}{\text{all positive responses}}$.…”
Section: Methods
confidence: 99%
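The two quantities defined in the excerpt can be sketched in a few lines of Python; the function names here are illustrative, not from the source:

```python
def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: TP / (TP + FP), i.e. TP over all positive responses."""
    return tp / (tp + fp)

def fdr(tp: int, fp: int) -> float:
    """False discovery rate: the complement of the PPV, FP / (TP + FP)."""
    return 1 - ppv(tp, fp)

# Example: 90 true positives and 10 false positives among 100 positive responses.
print(ppv(90, 10))  # 0.9
print(fdr(90, 10))  # ~0.1
```

Note that PPV and FDR always sum to 1, so reporting either one determines the other.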
“…ML methods can yield more meaningful results across a variety of research topics [3,22–24]. The important thing is to choose the correct algorithms to obtain results relevant to the problem [9,25–27]. ANN and fuzzy inference systems were preferred for this study because of their ability to learn and predict complex, nonlinear relationships.…”
Section: Machine Learning Algorithms
confidence: 99%
“…Support Vector Machine (SVM): This is considered the optimal classifier for determining binary outcomes, such as benignity and malignancy. The theoretical “hyperplane” that separates these two outcomes exists in a multi-dimensional space with as many dimensions as there are parameters [22,23]. 4.…”
Section: Decision Tree (DT)
confidence: 99%
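The hyperplane idea in the excerpt can be sketched with a fixed linear decision rule; the weights below are hand-picked for illustration, not fitted by an actual SVM solver:

```python
def classify(x, w=(1.0, 1.0), b=-1.0):
    """Assign a binary outcome by which side of the hyperplane
    w·x + b = 0 the point x falls on. With two parameters (features),
    the hyperplane is a line in 2-D; each extra parameter adds a dimension.
    Returns 1 (e.g. malignant) on the positive side, else 0 (e.g. benign)."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

print(classify((0.1, 0.0)))  # 0 — negative side of the hyperplane
print(classify((1.0, 0.9)))  # 1 — positive side of the hyperplane
```

A real SVM chooses `w` and `b` to maximize the margin between the two classes; the decision rule at prediction time has exactly this form.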
“…[Fig. 1: Flow chart of inclusion criteria for training and testing groups] …dimensional spaces, with each parameter considered to add one dimension, was introduced. To train and internally validate our predictive model, as well as to overcome dataset biases, a resampling technique known as k-fold cross-validation was employed [23]. This technique randomly partitions the training group into k subsamples (k = 10 in this case).…”
Section: Statistical Analysis and Artificial Intelligence Model
confidence: 99%
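The random-partitioning step described in the excerpt can be sketched as follows (a minimal illustration with k = 10, as in the quoted study; the function name and seed are mine):

```python
import random

def k_fold_splits(n_samples, k=10, seed=0):
    """Randomly partition sample indices into k roughly equal folds.
    Each fold serves once as the validation set while the remaining
    k-1 folds form the training set."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)          # random partition
    folds = [indices[i::k] for i in range(k)]     # k disjoint subsamples
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, val

splits = list(k_fold_splits(100, k=10))
print(len(splits))        # 10 train/validation splits
print(len(splits[0][1]))  # 10 validation samples per fold
```

Averaging the model's score over the k validation folds gives an internal-validation estimate that is less sensitive to any single random train/test split.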