2018
DOI: 10.1007/978-3-030-01090-4_23
PSense: Automatic Sensitivity Analysis for Probabilistic Programs

Cited by 13 publications (13 citation statements)
References 15 publications
“…Below we compare our result in detail with the most related results, i.e., [Barthe et al 2018], [Huang et al 2018b] and also a recent arXiv submission [Aguirre et al 2019]. Recall that we have discussed the issue of conditional branches at the end of Section 1.…”
Section: Experimental Examples (mentioning)
confidence: 67%
See 4 more Smart Citations
“…At each loop iteration, a data i is chosen uniformly from all n training data and the parameters in w are adjusted by the product of the step size and the gradient of the ith loss function G_i. The loop guard Φ can either be practical so that a fixed number of iterations is performed (as is analyzed in existing approaches [Barthe et al 2018; Hardt et al 2016; Huang et al 2018b]), or the local criteria that the magnitude ‖∇G‖₂ of the gradient of the total loss function G is small enough, or the global criteria that the value of G is small enough. In this paper, we consider the global criteria, i.e., the loop guard is of the form G(w) ≥ ζ where ζ is the threshold for "small enough".…”
Section: Motivating Examples (mentioning)
confidence: 99%
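The quoted passage describes stochastic gradient descent whose loop guard is the global criterion G(w) ≥ ζ rather than a fixed iteration count. Below is a minimal Python sketch of that loop under stated assumptions: the names (sgd_global_criterion, G, grad_G_i) and the fixed step size are illustrative and do not come from the cited paper.

import numpy as np

def sgd_global_criterion(G, grad_G_i, w0, n, step_size, zeta, max_iter=100_000, seed=0):
    # Hypothetical sketch of SGD with the global stopping criterion G(w) >= zeta.
    # G(w) returns the total loss (float); grad_G_i(i, w) returns the gradient of the
    # i-th loss term. None of these names are taken from the cited paper.
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    iters = 0
    # Loop guard Phi: keep iterating while the total loss is not yet "small enough".
    while G(w) >= zeta and iters < max_iter:
        i = rng.integers(n)                  # choose a datum i uniformly from the n training data
        w = w - step_size * grad_G_i(i, w)   # adjust w by step size times the gradient of G_i
        iters += 1
    return w

# Toy usage: quadratic losses G_i(w) = (w - x_i)^2 on scalar data; G is their mean.
x = np.array([1.0, 2.0, 3.0])
G = lambda w: float(np.mean((w - x) ** 2))
grad_G_i = lambda i, w: 2.0 * (w - x[i])
w_final = sgd_global_criterion(G, grad_G_i, w0=np.array([0.0]), n=3,
                               step_size=0.05, zeta=0.7)

The max_iter cap is a safeguard of this sketch only; with the global criterion alone, termination depends on the threshold ζ being reachable by the descent.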
See 3 more Smart Citations