2014
DOI: 10.1007/978-3-319-13338-6_7

Synthesizing Finite-State Protocols from Scenarios and Requirements

Abstract: Scenarios, or Message Sequence Charts, offer an intuitive way of describing the desired behaviors of a distributed protocol. In this paper we propose a new way of specifying finite-state protocols using scenarios: we show that it is possible to automatically derive a distributed implementation from a set of scenarios augmented with a set of safety and liveness requirements, provided the given scenarios adequately cover all the states of the desired implementation. We first derive incomplete state machines…
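The abstract outlines the synthesis pipeline at a high level: project the given scenarios onto per-process incomplete state machines, then complete them so that the safety and liveness requirements hold. A minimal sketch of the projection step is shown below, assuming scenarios are given as lists of (sender, receiver, message) events; the data structures and the per-event state-allocation policy are illustrative assumptions, not the paper's actual construction.

```python
from collections import defaultdict

def project_scenarios(scenarios):
    """Project each scenario (a list of (sender, receiver, message) events)
    onto per-process partial state machines.

    Each process starts in state 0; every event it participates in
    (a "!" send or a "?" receive) advances it to a local state.
    The result maps a process name to a partial transition table
    {state: {action: next_state}}, i.e. an *incomplete* state machine
    that a synthesis step would still have to complete against the
    safety and liveness requirements.
    """
    machines = defaultdict(lambda: defaultdict(dict))
    next_free = defaultdict(lambda: 1)           # next unused state id per process
    for scenario in scenarios:
        local_state = defaultdict(int)           # every scenario starts in state 0
        for sender, receiver, msg in scenario:
            for proc, action in ((sender, "!" + msg), (receiver, "?" + msg)):
                src = local_state[proc]
                table = machines[proc][src]
                if action in table:              # reuse a transition seen before
                    dst = table[action]
                else:                            # otherwise allocate a fresh state
                    dst = next_free[proc]
                    next_free[proc] += 1
                    table[action] = dst
                local_state[proc] = dst
    return machines

# Example: a toy request/acknowledge exchange between two processes.
demo_scenarios = [[("client", "server", "req"), ("server", "client", "ack")]]
for proc, table in project_scenarios(demo_scenarios).items():
    print(proc, dict(table))
```

The completion step, which is the paper's contribution, would then fill in the missing transitions of such partial tables so that the stated requirements are satisfied.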

Cited by 37 publications (30 citation statements)
References 27 publications
“…"Timeout" means that the algorithm was unable to terminate within the given time limit (60 seconds) in any of the 5 experiments with 150 states. 6 As expected, MooreMI always achieves 100% accuracy, since the input is a characteristic sample (we verified that indeed the machines learned by MooreMI are in each case equivalent to the original machine that produced the training 5 The random generation procedure takes as inputs a random seed, the number of states, and the sizes of the input and output alphabets of the machine. Two intermediate steps are worth mentioning: (1) After assigning a random output to each state, we fix a random permutation of states and assign the i-th output to the i-th state.…”
Section: Experimental Comparisonsupporting
confidence: 71%
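The quoted footnote describes how the random Moore machines used as training targets were generated. A rough Python reconstruction is given below; only the inputs (a random seed, the number of states, the input and output alphabet sizes) and the output-permutation step come from the quote, while the dictionary representation, the transition generation, and all names are assumptions (in particular, nothing here enforces the minimality the experiments require).

```python
import random

def random_moore_machine(seed, n_states, n_inputs, n_outputs):
    """Generate a random Moore machine, roughly following the quoted procedure:
    inputs are a seed, the number of states, and the alphabet sizes.

    The output-permutation step from the quote guarantees that the first
    min(n_states, n_outputs) states along the permutation carry distinct
    outputs; everything else here (random transitions, dict layout) is
    an assumption of this sketch.
    """
    rng = random.Random(seed)
    inputs = [f"i{k}" for k in range(n_inputs)]
    outputs = [f"o{k}" for k in range(n_outputs)]

    # Assign a random output to each state ...
    out = {q: rng.choice(outputs) for q in range(n_states)}
    # ... then fix a random permutation of states and assign the
    # i-th output to the i-th state along that permutation.
    perm = list(range(n_states))
    rng.shuffle(perm)
    for i, q in enumerate(perm[: len(outputs)]):
        out[q] = outputs[i]

    # Random total transition function (not necessarily minimal or connected).
    delta = {(q, a): rng.randrange(n_states) for q in range(n_states) for a in inputs}
    return {"states": range(n_states), "inputs": inputs,
            "output": out, "delta": delta, "initial": 0}

machine = random_moore_machine(seed=42, n_states=150, n_inputs=25, n_outputs=25)
```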
“…We randomly generated several minimal Moore machines of sizes 50 and 150 states, and input and output alphabet sizes |I| = |O| = 25. From each such machine, we generated a characteristic sample, and ran each of the three algorithms on this characteristic sample, i.e., using it as the training set. Then we took the learned machines generated by the algorithms, and evaluated these machines in terms of size (# states) and accuracy.…”
Section: Experimental Comparison (mentioning, confidence: 99%)
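The quoted setup (generate a minimal Moore machine, derive a characteristic sample, train each algorithm on that sample, then compare the learned machines by state count and accuracy) could be scripted roughly as follows. It reuses the dictionary representation from the generation sketch above; make_characteristic_sample, ALGORITHMS, and the random-word notion of accuracy are hypothetical placeholders for the citing paper's actual tooling and metric.

```python
import random

def run_moore_machine(machine, word):
    """Return the output sequence produced by a Moore machine on an input word."""
    q = machine["initial"]
    outs = [machine["output"][q]]
    for a in word:
        q = machine["delta"][(q, a)]
        outs.append(machine["output"][q])
    return outs

def accuracy(reference, learned, inputs, n_tests=1000, max_len=20, seed=0):
    """Fraction of random test words on which the learned machine agrees with
    the reference machine (one simple notion of accuracy; the citing paper's
    exact metric may differ)."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(n_tests):
        word = [rng.choice(inputs) for _ in range(rng.randint(1, max_len))]
        agree += run_moore_machine(reference, word) == run_moore_machine(learned, word)
    return agree / n_tests

# Hypothetical experiment loop, mirroring the quoted description:
# for target in random_machines:                      # 50- and 150-state machines
#     sample = make_characteristic_sample(target)     # placeholder for the real tool
#     for name, learn in ALGORITHMS.items():          # e.g. MooreMI and two baselines
#         learned = learn(sample)                     # train on the characteristic sample
#         report(name, len(learned["states"]),
#                accuracy(target, learned, target["inputs"]))
```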