2017
DOI: 10.48550/arxiv.1708.08719
Preprint

Better together? Statistical learning in models made of modules

Abstract: In modern applications, statisticians are faced with integrating heterogeneous data modalities relevant for an inference, prediction, or decision problem. In such circumstances, it is convenient to use a graphical model to represent the statistical dependencies, via a set of connected 'modules', each relating to a specific data modality, and drawing on specific domain expertise in their development. In principle, given data, the conventional statistical update then allows for coherent uncertainty quantification…

Cited by 32 publications (54 citation statements)
References 47 publications
“…As shown by Zigler et al [2013], performing the full Bayesian inference is possible, but leads to bias in the estimation of causal effects; see also Zigler [2016]. A review of various applications of modular inference is presented in Jacob et al [2017].…”
Section: Cutting Feedback in Bayesian Models
confidence: 99%
“…As shown in Lemma 1 of [Yu et al, 2021], the cut posterior π_cut(θ_1, θ_2) minimizes the Kullback-Leibler divergence between the families of densities q(θ_1, θ_2) having π_{1,cut} as their marginal distribution with respect to θ_1, and the full posterior π(θ_1, θ_2). Both Jacob et al [2017] and Yu et al [2021] propose data-driven methods to assess whether to cut feedback or not. Carmona and Nicholls [2020] extend the modular methodology to "semimodular" inference (SMI), where the user can regulate the amount of feedback passed on from the second module to the first one.…”
Section: Cutting Feedback in Bayesian Models
confidence: 99%
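For context on the notation in the statement above, here is a minimal sketch of the standard two-module setup underlying cut and semi-modular inference; the data labels Y_1 and Y_2 and the factorization below are illustrative notation of ours, not taken verbatim from the quoted papers.
\[
\pi(\theta_1, \theta_2 \mid Y_1, Y_2) \;\propto\; p(Y_1 \mid \theta_1)\, p(Y_2 \mid \theta_1, \theta_2)\, \pi(\theta_1)\, \pi(\theta_2) \qquad \text{(full posterior)}
\]
\[
\pi_{\mathrm{cut}}(\theta_1, \theta_2) \;=\; \pi_{1,\mathrm{cut}}(\theta_1)\, \pi(\theta_2 \mid Y_2, \theta_1), \qquad \pi_{1,\mathrm{cut}}(\theta_1) \;\propto\; p(Y_1 \mid \theta_1)\, \pi(\theta_1) \qquad \text{(cut posterior)}
\]
The cut posterior keeps the usual conditional for θ_2 but blocks feedback from Y_2 into θ_1; Lemma 1 of Yu et al [2021], as quoted above, characterizes it as the distribution closest in Kullback-Leibler divergence to the full posterior among those with θ_1-marginal π_{1,cut}. Semi-modular inference (Carmona and Nicholls, 2020) bridges the two regimes by tempering the feedback term, recovering the cut posterior at one end of its influence parameter and the full posterior at the other.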
“…Methods that posit the capability of recovering the correct components of the outcome model using flexible modelling without reference to the propensity score also provide valid routes to inference about this target, but these methods often carry a heavier computational burden. There are also links to modularized Bayesian inference (Bayarri et al, 2009; Jacob et al, 2017) which also depend on a 'conscious misspecification' formulation, and in the causal setting (the main examples and the examples in section 8) existing frequentist semiparametric theory can give insight into the operating characteristics of such Bayesian analyses; see Pompe and Jacob (2021) for initial explorations in this direction.…”
Section: Discussion
confidence: 99%
“…Bayarri et al (2009) argue for a form of Bayesian inference based on 'modularization' of the model, where a form of stagewise analysis in complex models is used. Motivated by formulations based on Bayesian mis-specification, Jacob et al (2017) provide extensive evidence that such modularized inference can be advantageous in Bayesian settings, including a study of the empirical properties of propensity score regression estimators using the methods from section 3.1.…”
Section: Conscious Mis-specification and Modularization
confidence: 99%