2001
DOI: 10.1016/s0888-613x(01)00039-1

Learning Bayesian network parameters from small data sets: application of Noisy-OR gates

Cited by 250 publications (147 citation statements): 2 supporting, 145 mentioning, 0 contrasting
References 14 publications

Citation statements, ordered by relevance:
“…Their foremost advantage is a small number of parameters that are sufficient to specify the entire CPT. This leads to a significant reduction of effort in knowledge elicitation from experts [3,5], improves the quality of distributions learned from data [8], and reduces the spatial and temporal complexity of algorithms for Bayesian networks [9,10].…”
Section: Introduction (mentioning)
Confidence: 99%
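The parameter saving this excerpt refers to is easy to make concrete. The sketch below is my illustration, not code from the cited paper: a leaky Noisy-OR gate over n binary parents needs only n link probabilities plus one leak term, yet determines all 2^n rows of the child's CPT. All function and variable names here are hypothetical.

```python
from itertools import product

def noisy_or_cpt(p, leak=0.0):
    """Build the full CPT of a binary child under a leaky Noisy-OR gate.

    p    -- link probabilities p_i = P(child=1 | only parent i is active)
    leak -- P(child=1 | no parent is active)

    Returns a dict mapping each parent configuration (tuple of 0/1)
    to P(child=1 | configuration).
    """
    cpt = {}
    for config in product((0, 1), repeat=len(p)):
        # Each active parent independently fails to trigger the child
        # with probability 1 - p_i; the child stays off only if all fail.
        p_off = 1.0 - leak
        for active, p_i in zip(config, p):
            if active:
                p_off *= (1.0 - p_i)
        cpt[config] = 1.0 - p_off
    return cpt

# Three parents: a 2^3-row CPT specified by just 3 (+1 leak) parameters.
cpt = noisy_or_cpt([0.8, 0.6, 0.3], leak=0.05)
print(cpt[(1, 0, 1)])  # P(child=1 | X1=1, X2=0, X3=1) = 0.867
```

With ten binary parents, a full CPT has 1024 rows but a Noisy-OR gate still needs only ten link probabilities (plus the leak), which is the reduction in elicitation and learning effort the excerpt describes.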
“…Parameter learning aims to determine the conditional probability distribution of each node under the established BN structure [11,12]. The conditional probability tables can be determined by learning the parameters from the database using a learning algorithm.…”
Section: BN Learning (mentioning)
Confidence: 99%
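For intuition, here is a minimal sketch of the standard counting estimator such parameter-learning passages allude to: each row of a node's CPT is estimated from the frequencies of child states within each parent configuration, optionally smoothed with a Dirichlet pseudo-count. The function name, data layout, and Laplace-style prior are my assumptions, not a specific method from the cited papers.

```python
import numpy as np

def estimate_cpt(data, child, parents, card, alpha=1.0):
    """Estimate P(child | parents) from complete data by counting.

    data    -- 2D int array, one row per sample, one column per variable
    child   -- column index of the child node
    parents -- column indices of its parents
    card    -- card[i] = number of states of variable i
    alpha   -- Dirichlet pseudo-count (alpha=0 gives maximum likelihood)

    Returns an array of shape (#parent configurations, card[child])
    whose rows are the estimated conditional distributions.
    """
    n_cfg = int(np.prod([card[p] for p in parents])) if parents else 1
    counts = np.full((n_cfg, card[child]), alpha)
    for row in data:
        # Flatten the parent configuration into a single row index.
        cfg = 0
        for p in parents:
            cfg = cfg * card[p] + row[p]
        counts[cfg, row[child]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Two binary parents, one ternary child, 200 synthetic samples.
rng = np.random.default_rng(0)
data = rng.integers(0, [2, 2, 3], size=(200, 3))
print(estimate_cpt(data, child=2, parents=[0, 1], card=[2, 2, 3]))
```

With small data sets, many parent configurations receive few or no samples, which is exactly the sparsity the cited paper addresses by tying the CPT to the far fewer free parameters of a Noisy-OR gate.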
“…The authors of these systems have reported that the sizes of the underlying networks are superlinear in the number of trials [44,45], and that the training time is superlinear in the network size [21,22]. …”
Section: Scalability (mentioning)
Confidence: 99%