2008
DOI: 10.1016/j.nima.2008.04.006
Applying Bayesian neural networks to event reconstruction in reactor neutrino experiments

Cited by 15 publications (11 citation statements)
References 4 publications
“…Unlike before, we then re-cluster the constituents into anti-k_T subjets and re-scale the energy of each subjet (or cluster) by a random number drawn from a Gaussian distribution with mean zero and a given standard deviation. After this shift the predictive uncertainty given by the network before sigmoid becomes (σ + σ^(unconstr))/2 (Eq. 23). In Fig. 11 we show this effect separately for a sample of top and QCD jets.…”
Section: Systematic Uncertainty From Energy Scale
confidence: 94%
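
A minimal sketch of the smearing step the quoted passage describes, under the assumption that re-scaling "by a random number drawn from a Gaussian distribution with mean zero" means a multiplicative factor (1 + ε) with ε ~ N(0, σ); the energies, σ value, and names below are illustrative, not taken from the citing paper's code:

```python
# Toy energy-scale smearing: multiply each subjet energy by (1 + eps),
# with eps drawn from a zero-mean Gaussian of chosen width (assumed
# interpretation of the quoted procedure).
import numpy as np

rng = np.random.default_rng(2)

subjet_energies = np.array([120.0, 45.0, 30.5])  # GeV, toy values
sigma = 0.1  # assumed relative energy-scale spread

eps = rng.normal(0.0, sigma, size=subjet_energies.size)
smeared = subjet_energies * (1.0 + eps)
print(smeared)
```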
“…Like all classifying neural networks, BNNs [21–23] relate training data D to a known output or classifier C through a set of network parameters ω. Bayes' theorem then defines the (posterior) probability distribution for the parameters p(ω|{D, C}) from the general relation…”
Section: Bayesian Neural Network
confidence: 99%
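
The "general relation" the quote truncates is presumably Bayes' theorem applied to the network parameters; in the quoted passage's notation it would read:

```latex
p(\omega \mid \{D, C\}) \;=\; \frac{p(\{D, C\} \mid \omega)\, p(\omega)}{p(\{D, C\})}
```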
“…While standard neural networks adapt a set of weights ω to describe a general function based on some kind of training, Bayesian networks learn weight distributions [47–52]. Sampling over those ω-distributions gives us access to uncertainties in the network output, induced by limitations of the training data.…”
Section: Bayesian Regression
confidence: 99%
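
A minimal sketch of what "sampling over those ω-distributions" buys: a toy one-layer network whose weights follow assumed Gaussian posteriors; repeated weight draws yield a predictive mean and spread. All numbers and names here are hypothetical, not from the cited works:

```python
# Toy Bayesian layer: weights follow independent Gaussians (assumed
# learned posterior). Sampling weights and re-evaluating the network
# converts the weight distributions into a predictive uncertainty.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior: per-weight means and standard deviations.
w_mean, w_std = np.array([1.5, -0.7]), np.array([0.1, 0.3])
b_mean, b_std = 0.2, 0.05

def sample_prediction(x):
    """Draw one weight sample and evaluate the network on input x."""
    w = rng.normal(w_mean, w_std)
    b = rng.normal(b_mean, b_std)
    return np.tanh(x @ w + b)

x = np.array([0.4, 1.2])
preds = np.array([sample_prediction(x) for _ in range(1000)])

# Predictive mean and the spread induced by the weight distributions.
print(f"mean = {preds.mean():.3f}, sigma = {preds.std():.3f}")
```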
“…One can also bootstrap the training data for fixed weight initialization to uniquely probe the statistical uncertainty from the training set size. An automated approach to estimating these uncertainties that does not require retraining multiple networks is Bayesian neural networks [56–60]. Estimating the uncertainty from the input feature accuracy can be performed by varying the inputs within their systematic uncertainty (see Sec.…”
Section: Sources Of Uncertainty, 4.1 Overview
confidence: 99%
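
A minimal sketch of the bootstrap idea in the last quote, using a deterministic least-squares fit as a stand-in for training a network from a fixed initialization; the data and model are toy assumptions, not the citing paper's setup:

```python
# Bootstrap the training data at fixed "initialization": refit a
# deterministic model on resampled data and read the statistical
# uncertainty from the spread of predictions across resamples.
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D regression data standing in for a real training set.
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x) + rng.normal(0.0, 0.1, size=x.size)

def fit_poly(x_train, y_train, degree=5):
    """Deterministic least-squares fit: the analogue of training from
    a fixed weight initialization (no training randomness)."""
    return np.polyfit(x_train, y_train, degree)

x_test = np.array([0.0, 0.5])
preds = []
for _ in range(200):
    idx = rng.integers(0, x.size, size=x.size)  # resample with replacement
    coeffs = fit_poly(x[idx], y[idx])
    preds.append(np.polyval(coeffs, x_test))
preds = np.array(preds)

for xt, m, s in zip(x_test, preds.mean(axis=0), preds.std(axis=0)):
    print(f"x={xt:+.1f}: prediction {m:.3f} +/- {s:.3f}")
```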