1981
DOI: 10.1109/tpami.1981.4767102
Bayesian and Decision Tree Approaches for Pattern Recognition Including Feature Measurement Costs

Cited by 30 publications (10 citation statements)
References 10 publications
“…Moreover, at the entry level of the industry, e-health sensors will gather all sensor data, which will be used for training our system. In our proposed system, a probabilistic model called a Bayesian Decision Tree [57,58] has been used for training and validation to obtain a health condition score for the fitness of the employee. We have used this mixture model of decision trees and Bayesian theory to handle categorical variables and missing data in a principled way and obtain better performance.…”
Section: Health Condition Detection
confidence: 99%
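The statement above describes combining a decision tree with Bayesian reasoning so that categorical variables and missing data are handled gracefully. A minimal sketch of that idea, falling back on class counts stored at each node when a feature is absent (the tree, feature names, and counts below are invented for illustration, not taken from the cited system):

```python
# Illustrative sketch (not the cited system): a decision-tree node falls
# back on the most probable class, estimated from stored training counts,
# when the categorical feature it splits on is missing from the sample.
from collections import Counter

def classify(node, sample):
    """Walk the tree; at a node whose split feature is missing (or has an
    unseen category), return the class with the highest estimated
    posterior from the counts stored at that node."""
    while "feature" in node:
        value = sample.get(node["feature"])
        if value is None or value not in node["children"]:
            # Missing / unseen category: argmax over stored class counts.
            return node["counts"].most_common(1)[0][0]
        node = node["children"][value]
    return node["label"]

# Toy tree: split on a hypothetical "bp" (blood pressure) reading,
# with class counts kept at the node for the missing-data fallback.
tree = {
    "feature": "bp",
    "counts": Counter({"fit": 6, "unfit": 4}),
    "children": {
        "normal": {"label": "fit"},
        "high": {"label": "unfit"},
    },
}

print(classify(tree, {"bp": "high"}))  # -> unfit
print(classify(tree, {}))              # -> fit (fallback to majority class)
```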
“…Each node in the output layer is associated with a sample label. ID3 can be applied following Dattatreya (Dattatreya and Sarma 1981), but that method computes the posterior probability redundantly. Thus, the tree is simplified and its generalization ability enhanced using a post-pruning algorithm (Murthy et al 2001).…”
Section: Posterior Probability Computation
confidence: 99%
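The ID3 procedure referenced in this statement grows the tree by choosing, at each node, the attribute with the highest information gain. A hedged sketch of that criterion (the attribute names and toy data are invented for illustration):

```python
# Sketch of the ID3 split criterion: information gain = entropy of the
# labels minus the weighted entropy of the subsets induced by an attribute.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Information gain of splitting (rows, labels) on attribute attr."""
    n = len(labels)
    gain = entropy(labels)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    for subset in by_value.values():
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Invented toy data: a perfectly separating attribute has gain 1 bit.
rows = [{"bp": "high"}, {"bp": "high"}, {"bp": "normal"}, {"bp": "normal"}]
labels = ["unfit", "unfit", "fit", "fit"]
print(info_gain(rows, labels, "bp"))  # -> 1.0
```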
“…To determine the optimal K1 and K2, the misclassification rates on the whole training samples are estimated. Step 2: compute K1 × K2 misclassification rates from the combinations of the K1 classifiers with the K2 features (Paliwal and Rao 1983):

pp = l_p / k,  if l_p > l_n
pp = l_n / k,  if l_p ≤ l_n

where l_p is the number of positive feature points, l_n the number of negative feature points, and k the total number of feature points. LDA (Martinez and Kak 2001), (Dattatreya and Sarma 1981) …”
Section: Improved Majority Rule
confidence: 99%
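Read plainly, the piecewise formula in this statement makes pp the majority-side fraction of the k feature points. A direct transcription (k is passed in explicitly, since the excerpt defines it separately as the total number of feature points):

```python
# Majority-rule fraction from the excerpt:
#   pp = l_p / k  if l_p > l_n,   else  pp = l_n / k
def pp(l_p, l_n, k):
    """l_p: positive feature points, l_n: negative, k: total points."""
    return (l_p / k) if l_p > l_n else (l_n / k)

print(pp(7, 3, 10))  # -> 0.7 (positives dominate)
print(pp(3, 7, 10))  # -> 0.7 (negatives dominate)
```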
“…This will be available with the system. Suppose that the performance constraints available from the analysis of the problem give the following matrix for the cost of misclassification: the method of [12] can be used to obtain the required decision tree. Figure 2 gives the resultant decision tree [12].…”
Section: System For Design Of Simple PR Problems
confidence: 99%
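A misclassification-cost matrix like the one this statement refers to typically drives a minimum-expected-cost decision rule: pick the class whose expected cost under the posterior distribution is lowest. A generic sketch (the 2×2 cost values below are placeholders, not the matrix from the cited paper):

```python
# Minimum-expected-cost decision: cost[i][j] is the cost of deciding
# class j when the true class is i; posteriors[i] is P(class i | x).
def min_cost_decision(posteriors, cost):
    """Return the index of the class with the lowest expected cost."""
    n = len(cost[0])
    expected = [
        sum(posteriors[i] * cost[i][j] for i in range(len(posteriors)))
        for j in range(n)
    ]
    return min(range(n), key=expected.__getitem__)

# Placeholder matrix: misclassifying true class 0 costs 5, class 1 costs 1.
cost = [[0, 5], [1, 0]]
# Even with posterior 0.7 for class 1, the cheaper decision is class 0.
print(min_cost_decision([0.3, 0.7], cost))  # -> 0
```

Note how asymmetric costs can override the maximum-posterior choice, which is exactly why cost matrices matter in decision-tree design.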