Abstract. Conformance testing for finite state machines and regular inference both aim at identifying the model structure underlying a black-box system on the basis of a limited set of observations. Whereas the former technique checks for equivalence with a given conjecture model, the latter addresses the corresponding synthesis problem by means of techniques adapted from automata learning. In this paper we establish a common framework for investigating the similarities of these techniques, showing how results in one area can be transferred to the other and explaining the reasons for their differences.
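To make the correspondence concrete, the sketch below (not taken from the paper; the alphabet and length bound are assumptions) shows the bridge between the two views: an equivalence query against a conjecture, as used in regular inference, can be approximated by a bounded conformance test over the black box, yielding a counterexample that a learner would then process.

```python
# A minimal sketch of an equivalence query approximated by bounded
# conformance testing; the alphabet and length bound are hypothetical.
import itertools
from typing import Callable, Optional

ALPHABET = ["a", "b"]  # hypothetical alphabet

def conformance_check(black_box: Callable[[str], bool],
                      conjecture: Callable[[str], bool],
                      max_len: int = 4) -> Optional[str]:
    """Test all words up to max_len against the conjecture and return a
    counterexample word, or None if the conjecture survives the suite."""
    for n in range(max_len + 1):
        for letters in itertools.product(ALPHABET, repeat=n):
            w = "".join(letters)
            if black_box(w) != conjecture(w):
                return w  # disagreement: feed this back to the learner
    return None

# Usage: the black box accepts words with an even number of 'a's; a
# conjecture accepting everything is refuted by the shortest witness.
assert conformance_check(lambda w: w.count("a") % 2 == 0,
                         lambda w: True) == "a"
```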
Abstract. In this paper, we present the LearnLib, a library of tools for automata learning, which is explicitly designed for the systematic experimental analysis of the profiles of available learning algorithms and corresponding optimizations. Its modular structure allows users to configure tailored learning scenarios that exploit specific properties of their envisioned applications. As has been shown earlier, exploiting application-specific structural features enables optimizations that may lead to performance gains of several orders of magnitude, a necessary precondition for making automata learning applicable to realistic scenarios.
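LearnLib's concrete API is not reproduced here; the following sketch only illustrates, in simplified Python, the kind of modular oracle pipeline the abstract alludes to: filters such as a query cache can be stacked between the learner and the system under learning, one source of the reported performance gains.

```python
# A minimal sketch, assuming a membership oracle is just a callable from
# words to booleans; this is an illustration, not LearnLib's actual API.
from typing import Callable, Dict

class CacheFilter:
    """Answers repeated membership queries from a cache, a typical
    application-independent optimization in a modular learning setup."""
    def __init__(self, oracle: Callable[[str], bool]):
        self._oracle = oracle
        self._cache: Dict[str, bool] = {}
        self.misses = 0  # queries that actually reached the system

    def __call__(self, word: str) -> bool:
        if word not in self._cache:
            self.misses += 1
            self._cache[word] = self._oracle(word)
        return self._cache[word]

# Usage: stack the filter in front of an expensive oracle (a stand-in
# system here); the second identical query never hits the system.
oracle = CacheFilter(lambda w: w.count("a") % 2 == 0)
assert oracle("aa") and oracle("aa") and oracle.misses == 1
```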
Abstract. Techniques for inferring a regular language, in the form of a finite automaton, from a sufficiently large sample of accepted and non-accepted input words have been employed to construct models of software and hardware systems, for use, e.g., in test case generation. We intend to adapt these techniques to construct state machine models of entities of communication protocols. The alphabet of such state machines can be very large, since a symbol typically consists of a protocol data unit type together with a number of parameters, each of which can assume many values. In typical algorithms for regular inference, the number of required input words grows with the size of the alphabet and the size of the minimal DFA accepting the language. We therefore modify such an algorithm (Angluin's algorithm) so that its complexity grows not with the size of the alphabet, but only with the size of a certain symbolic representation of the DFA. The main new idea is to infer, for each state, a partitioning of input symbols into equivalence classes, under the hypothesis that all input symbols in an equivalence class have the same effect on the state machine. Whenever such a hypothesis is disproved, equivalence classes are refined. We show that our modification retains the good properties of Angluin's original algorithm, but that its complexity grows with the size of our symbolic DFA representation rather than with the size of the alphabet. We have implemented the algorithm; experiments on synthesized examples are consistent with these complexity results.
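The refinement step can be pictured as follows; the sketch is a simplification (the protocol symbol names are hypothetical) that keeps only the bookkeeping of per-state equivalence classes: all symbols start in one class, and a class is split when some symbol is observed to behave differently from its representative.

```python
# A simplified sketch of the per-state partition refinement described
# above; the protocol symbols are hypothetical.
from typing import List, Set

def refine(partition: List[Set[str]], symbol: str) -> List[Set[str]]:
    """Split the class containing `symbol` off into a singleton, modelling
    the refinement triggered when an equivalence hypothesis is disproved."""
    result = []
    for cls in partition:
        if symbol in cls and len(cls) > 1:
            result.append({symbol})        # symbol now stands alone
            result.append(cls - {symbol})  # rest keep the old hypothesis
        else:
            result.append(cls)
    return result

# Usage: initially all symbols are hypothesized to act alike in a state;
# a counterexample involving 'login' splits it into its own class.
partition = [{"login", "logout", "ping", "data"}]
partition = refine(partition, "login")
assert {"login"} in partition
```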