2022
DOI: 10.48550/arxiv.2201.10859
Preprint

Visualizing the diversity of representations learned by Bayesian neural networks

Cited by 1 publication (1 citation statement). References 0 publications.
“…Intuitively, in contrast to MAP learning, where point estimates of the weights represent one deterministic decision-making strategy, a posterior distribution represents an infinite ensemble of models that employ different strategies towards the prediction. By aggregating the variability of the networks' decision-making processes, we can obtain a broader outlook on the features that were used for the prediction, and thus deeper insights into the models' behavior (Grinwald et al. 2022).…”
Section: Introduction (mentioning)
confidence: 99%
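
The idea in the cited statement can be illustrated with a minimal sketch: draw weight samples from an (assumed) approximate posterior, treat each sample as one model in the ensemble, and aggregate a per-sample feature-attribution signal. This is not the paper's implementation; the Gaussian posterior over logistic-regression weights, the input-gradient saliency, and all data below are hypothetical placeholders chosen only to make the aggregation step concrete.

import numpy as np

rng = np.random.default_rng(0)

# Toy "posterior": independent Gaussians over logistic-regression weights (assumed).
d = 20                                # input dimensionality
post_mean = rng.normal(size=d)        # placeholder posterior mean
post_std = 0.3 * np.ones(d)           # placeholder posterior std

x = rng.normal(size=d)                # a single input to explain

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_samples = 200
saliencies = np.empty((n_samples, d))
preds = np.empty(n_samples)

for s in range(n_samples):
    w = rng.normal(post_mean, post_std)   # one model drawn from the ensemble
    p = sigmoid(w @ x)                    # its prediction for x
    preds[s] = p
    # Input-gradient saliency of this sample: d p / d x = p * (1 - p) * w.
    saliencies[s] = p * (1.0 - p) * w

# Aggregate across the ensemble: the mean saliency shows which features are
# used on average, while the per-feature spread shows where the sampled
# models' decision-making strategies disagree.
mean_saliency = saliencies.mean(axis=0)
saliency_spread = saliencies.std(axis=0)

print("mean prediction:", preds.mean())
print("top features by |mean saliency|:", np.argsort(-np.abs(mean_saliency))[:5])
print("most contested features:", np.argsort(-saliency_spread)[:5])

Under these assumptions, a MAP model would yield a single saliency vector, whereas the posterior ensemble additionally exposes the per-feature disagreement captured in saliency_spread, which is the "variability of the decision-making processes" the quoted passage refers to.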