2019
DOI: 10.1007/978-3-030-29436-6_16
Computing Expected Runtimes for Constant Probability Programs

Abstract: We introduce the class of constant probability (CP) programs and show that classical results from probability theory directly yield a simple decision procedure for (positive) almost sure termination of programs in this class. Moreover, asymptotically tight bounds on their expected runtime can always be computed easily. Based on this, we present an algorithm to infer the exact expected runtime of any CP program.
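To make the abstract concrete, here is a hedged sketch (not from the paper) of the kind of program the CP class covers: a loop whose variable is updated by constants with fixed probabilities. For a biased walk that decrements with probability p > 1/2, a standard drift argument gives the expected runtime x0 / (2p - 1); the simulation below checks this closed form empirically. The function name and parameters are illustrative, not taken from the paper.

```python
import random

def cp_program_runtime(x0, p_dec, rng):
    """Run one trace of a simple constant-probability (CP) program:
    while x > 0: with probability p_dec do x -= 1, else x += 1.
    Returns the number of loop iterations (the runtime)."""
    x, steps = x0, 0
    while x > 0:
        x += -1 if rng.random() < p_dec else 1
        steps += 1
    return steps

# For p_dec > 1/2 the loop is positively almost surely terminating,
# and the expected runtime is x0 / (2*p_dec - 1) by a drift argument.
x0, p_dec = 10, 0.75
expected = x0 / (2 * p_dec - 1)  # = 20.0 for these parameters

rng = random.Random(42)
trials = 20_000
empirical = sum(cp_program_runtime(x0, p_dec, rng) for _ in range(trials)) / trials
print(expected, round(empirical, 1))
```

With p_dec = 1/2 the same loop is almost surely terminating but not positively so (infinite expected runtime), illustrating the AST/PAST distinction used in the citing benchmarks below.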

Cited by 17 publications (10 citation statements)
References 32 publications (125 reference statements)
“…We present our experimental results by separating our benchmarks within three categories: (i) 21 programs which are PAST (Table 1), (ii) 11 programs which are AST (Table 2) but not necessarily PAST, and (iii) 6 programs which are not AST (Table 3). The benchmarks have either been introduced in the literature on probabilistic programming [42,10,4,22,38], are adaptations of well-known stochastic processes or have been designed specifically to test unique features of AMBER, like the ability to handle polynomial real arithmetic.…”
Section: Experimental Setting and Results
confidence: 99%
“…With these criteria, 10 out of the 50 original benchmarks of [10] and [42] remain. We add 11 additional benchmarks which have either been introduced in the literature on probabilistic programming [4,22,38], are adaptations of well-known stochastic processes or have been designed specifically to test unique features of AMBER. Notably, out of the 50 original benchmarks from [42] and [10], only 2 remain which are included in our benchmarks and which AMBER cannot prove PAST (because they are not Prob-solvable).…”
Section: Experimental Setting and Results
confidence: 99%