2021
DOI: 10.48550/arxiv.2104.14821
Preprint

Interpretability of Epidemiological Models: The Curse of Non-Identifiability

Abstract: Interpretability of epidemiological models is a key consideration, especially when these models are used in a public health setting. Interpretability is strongly linked to the identifiability of the underlying model parameters, i.e., the ability to estimate parameter values with high confidence given observations. In this paper, we define three separate notions of identifiability that explore the different roles played by the model definition, the loss function, the fitting methodology, and the quality and qua…
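To make the abstract's notion of identifiability concrete, the sketch below (not taken from the paper; it assumes a textbook SIR model with hypothetical parameter values) illustrates the well-known practical non-identifiability of the transmission rate beta and recovery rate gamma when only the early, quasi-exponential phase of an outbreak is observed: short-horizon trajectories depend on the two parameters almost entirely through the difference beta - gamma.

```python
# Minimal sketch (not from the paper): two SIR parameterisations sharing the
# same early growth rate beta - gamma produce nearly indistinguishable
# short-horizon infection curves, so beta and gamma cannot be recovered
# separately from such data.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Standard SIR model, population normalised to 1."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

def infected_curve(beta, gamma, days=20, i0=1e-4):
    """Daily infected fraction over a short early-epidemic window."""
    t_eval = np.arange(days)
    sol = solve_ivp(sir, (0, days), [1.0 - i0, i0, 0.0],
                    args=(beta, gamma), t_eval=t_eval, rtol=1e-8)
    return sol.y[1]

# Both hypothetical pairs share beta - gamma = 0.25, the early exponential rate.
curve_a = infected_curve(beta=0.50, gamma=0.25)
curve_b = infected_curve(beta=0.45, gamma=0.20)

rel_gap = np.max(np.abs(curve_a - curve_b) / curve_a)
print(f"max relative difference over the window: {rel_gap:.2%}")
# Under realistic observation noise, a least-squares fit to such data
# constrains beta - gamma far more tightly than beta or gamma individually.
```

Under a least-squares loss on short-horizon data of this kind, many (beta, gamma) pairs achieve nearly the same fit, which is the flavour of practical non-identifiability the abstract is concerned with.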

Cited by 1 publication (1 citation statement)
References 11 publications (27 reference statements)
“…Different stakeholders have different needs for explanation [12,75], but these needs are not often well-articulated or distinguished from each other [38,41,54,65,84]. Clarity on the intended use of explanation is crucial to select an appropriate XAI tool, as specialized methods exist for specific needs like debugging [39], formal verification (safety) [18,28,85], uncertainty quantification [1,79], actionable recourse [40,76], mechanism inference [20], causal inference [11,26,62], robustness to adversarial inputs [48,52], data accountability [87], social transparency [23], interactive personalization [78], and fairness and algorithmic bias [60]. In contrast, feature importance methods like LIME [66] and SHAP [49,50] focus exclusively on computing quantitative evidence for indicative conditionals [10,30] (of the form "If the applicant doesn't have enough income, then she won't get the loan approved"), with some newer counterfactual explanation methods [8,56,72] and negative contrastive methods [51] finding similar evidence for subjunctive conditionals [14,64] (of the form "If the applicant increases her income, then she would get the loan approved").…”
Section: The Challenges
Citation type: mentioning
Confidence: 99%
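The distinction this citing statement draws between indicative and subjunctive conditionals can be sketched on a toy loan-approval example. The snippet below is purely illustrative and is not LIME, SHAP, or any method from the cited works; the scoring rule, thresholds, and applicant values are hypothetical. A crude feature-perturbation flip rate stands in for indicative-style evidence ("if income is insufficient, the loan is not approved"), and a brute-force search for the smallest income increase that changes the outcome stands in for subjunctive-style counterfactual evidence ("if she increased her income, the loan would be approved").

```python
# Conceptual sketch only (not LIME or SHAP): indicative-style evidence via a
# simple feature-perturbation flip rate, and subjunctive-style evidence via a
# brute-force counterfactual search, on a hypothetical loan-approval rule.
import numpy as np

def approves_loan(income, debt):
    """Toy scoring rule: approve when income outweighs debt by a margin.
    The coefficients and threshold are made up for illustration."""
    return income - 0.8 * debt > 20.0

applicant = {"income": 30.0, "debt": 25.0}
print("approved:", approves_loan(**applicant))  # False for these values

# Indicative-style evidence: how often does perturbing one feature (holding
# the other fixed) flip the current decision? A higher flip rate suggests the
# decision presently hinges on that feature.
rng = np.random.default_rng(0)
for feature in applicant:
    flips = 0
    for _ in range(1000):
        perturbed = dict(applicant)
        perturbed[feature] *= rng.uniform(0.5, 1.5)
        flips += approves_loan(**perturbed) != approves_loan(**applicant)
    print(f"decision flip rate when perturbing {feature}: {flips / 1000:.2f}")

# Subjunctive-style evidence: the smallest income increase that would change
# the outcome, i.e. "if the applicant increased her income by delta, then she
# would get the loan approved".
for delta in np.arange(0.0, 50.0, 0.5):
    if approves_loan(applicant["income"] + delta, applicant["debt"]):
        print(f"counterfactual: an income increase of about {delta:.1f} flips the decision")
        break
```

Methods such as LIME and SHAP compute far more principled local attributions than the perturbation loop above; the sketch only separates the two kinds of conditional evidence the statement contrasts.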