We present Dartagnan, a bounded model checker (BMC) for concurrent programs under weak memory models. Its distinguishing feature is that the memory model is not implemented inside the tool but taken as part of the input. Dartagnan reads CAT, the standard language for memory models, in which one can define x86/TSO, ARMv7, ARMv8, Power, C/C++, and the Linux kernel concurrency primitives. BMC with memory models as inputs is challenging: one has to encode into SMT not only the program but also its semantics as defined by the memory model. What makes Dartagnan scale is its relation analysis, a novel static analysis that significantly reduces the size of the encoding. Dartagnan matches or even exceeds the performance of the model-specific verification tools Nidhugg and CBMC, as well as the performance of Herd, a CAT-compatible litmus testing tool. Compared to the unoptimized encoding, the speed-up is often more than two orders of magnitude.
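To illustrate what such an encoding looks like, the following is a minimal sketch, not Dartagnan's actual encoding, of how a CAT-style axiom such as acyclic(po | rf) over a fixed set of events can be phrased in SMT with z3's Python API. The event names, edge sets, and the integer-clock formulation of acyclicity are illustrative assumptions; a relation analysis would additionally discard edge variables that static analysis proves can never hold.

    # Hypothetical sketch: encoding the CAT-style axiom acyclic(po | rf)
    # for a tiny, fixed execution candidate with z3.
    from z3 import And, Bool, Implies, Int, Solver

    events = ["Wx1", "Rx", "Wy1", "Ry"]             # four memory events
    po = [("Wx1", "Ry"), ("Wy1", "Rx")]             # program order (fixed)
    rf_candidates = [("Wx1", "Rx"), ("Wy1", "Ry")]  # possible reads-from edges

    s = Solver()
    clock = {e: Int(f"clock_{e}") for e in events}  # one integer clock per event
    rf = {(a, b): Bool(f"rf_{a}_{b}") for a, b in rf_candidates}

    # Acyclicity of po | rf: every edge strictly increases the clock.
    for a, b in po:
        s.add(clock[a] < clock[b])
    for (a, b), edge in rf.items():
        s.add(Implies(edge, clock[a] < clock[b]))

    # In this toy instance every read has exactly one write to read from.
    s.add(And(*rf.values()))

    print(s.check())  # sat: an execution satisfying the axiom exists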
We present porthos, the first tool that discovers porting bugs in performance-critical code. porthos takes as input a program, the memory model of the source architecture for which the program was developed, and the memory model of the target architecture to which it is ported. If the code is not portable, porthos finds a bug in the form of an unexpected execution: an execution that is consistent with the target but inconsistent with the source memory model. Technically, porthos implements a bounded model checking method that reduces the portability analysis problem to satisfiability modulo theories (SMT). There are two main problems in this reduction for which we present novel and efficient solutions. First, the formulation of the portability problem contains a quantifier alternation (existential for consistency, universal for inconsistency). We introduce a formula that encodes both requirements in a single existential query. Second, the supported memory models (e.g., Power) contain recursive definitions. We compute the required least fixed point semantics for recursion (a problem that was left open in [47]) efficiently in SMT. Finally, we present the first experimental analysis of portability from TSO to Power.
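As a rough illustration of the single existential query, the sketch below, which is hypothetical and not porthos' actual encoding, shares one set of execution variables between two placeholder memory-model axioms and asserts consistency with the target together with the negation of consistency with the source. For genuinely recursive relation definitions the negation is not this simple, which is where the least fixed point computation comes in.

    # Hypothetical sketch: the portability query "some execution is
    # consistent with the target model but inconsistent with the source
    # model" as a single existential SMT formula.
    from z3 import And, Bools, Not, Or, Solver

    # Boolean abstraction of one execution candidate: derived-relation
    # edges shared by both memory-model encodings (stand-ins only).
    rf, fr, ppo = Bools("rf fr ppo")

    # Placeholder axioms; real models would be compiled from CAT.
    consistent_source = Not(And(rf, fr))   # "TSO-like" stand-in
    consistent_target = Or(Not(rf), ppo)   # "Power-like" stand-in

    s = Solver()
    s.add(consistent_target)        # consistent with the target model
    s.add(Not(consistent_source))   # inconsistent with the source model

    print(s.check())  # sat: a witness execution, i.e., a portability bug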
To improve performance, multiprocessor systems implement weak memory consistency models, and a number of such models have been developed over the past years. Weak memory models, however, lead to unforeseen program behavior, and there is a current need for memory model-aware program analysis techniques. The problem is that every memory model calls for new verification algorithms. We study a prominent approach to program analysis: testing. The testing problem takes as input sequences of operations, one for each process in the concurrent program. The task is to check whether these sequences can be interleaved into an execution of the entire program that respects the constraints of the memory model. We determine the complexity of the testing problem for most of the known memory models. Moreover, we study the impact on the complexity of parameters like the number of concurrent processes, the length of their executions, and the number of shared variables. What differentiates our approach from related results is a unified analysis. Instead of considering one memory model after the other, we build upon work of Steinke and Nutt. They showed that the existing memory models form a natural hierarchy where one model is called weaker than another if it includes the latter's behavior [33]. Using the Steinke-Nutt hierarchy, we develop three general concepts that allow us to quickly determine the complexity of a testing problem. (i) We generalize the technique of problem reductions from complexity theory. So-called range reductions propagate hardness results between memory models. We apply them to establish NP lower bounds for the stronger memory models. (ii) For the weaker models, we present polynomial-time testing algorithms. Here, the common idea is determinization. (iii) Finally, we give a single SAT encoding of the testing problem that works for all memory models in the Steinke-Nutt hierarchy. It shows membership in NP. Our results are general enough to classify future weak memory models, and they show that SAT solvers are adequate tools for their analysis.
This paper reports progress in verification tool engineering for weak memory models. We present two bounded model checking tools for concurrent programs. Their distinguishing feature is modularity: besides a program, they expect as input a module describing the hardware architecture for which the program should be verified. DARTAGNAN verifies state reachability under the given memory model using a novel SMT encoding. PORTHOS checks state equivalence under two given memory models using a guided search strategy. We have performed experiments to compare our tools against other memory model-aware verifiers and find them very competitive, despite the modularity offered by our approach.
To improve the performance of the memory system, multiprocessors implement weak memory consistency models. Weak memory models admit different views of the processes on their load and store instructions, thus allowing for computations that are not sequentially consistent. Program analyses have to take into account the memory model of the targeted hardware. This is challenging because numerous memory models have been developed, and every memory model requires its own analysis. In this article, we study a prominent approach to program analysis: testing. The testing problem takes as input sequences of operations, one for each process in the concurrent program. The task is to check whether these sequences can be interleaved into an execution of the entire program that respects the constraints of the memory model under consideration. We determine the complexity of the testing problem for most of the known memory models. Moreover, we study the impact on the complexity of parameters such as the number of concurrent processes, the length of their executions, and the number of shared variables. What differentiates our contribution from related results is a uniform approach that avoids considering each memory model on its own. We build upon work of Steinke and Nutt. They showed that the existing memory models form a hierarchy where one model is called weaker than another if it includes the latter's behavior. Using the Steinke-Nutt hierarchy, we develop three general concepts that allow us to quickly determine the complexity of a testing problem. First, we generalize the technique of problem reductions from complexity theory. So-called range reductions propagate hardness results between memory models, and we apply them to establish NP lower bounds for the stronger memory models. Second, for the weaker models, we present polynomial-time testing algorithms that are inspired by determinization algorithms for automata. Finally, we describe a single SAT encoding of the testing problem that works for all memory models in the Steinke-Nutt hierarchy, proving membership in NP. Our results are general enough to carry over to future weak memory models. Moreover, they show that SAT solvers are adequate tools for testing.
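To make the encoding idea concrete, here is a minimal sketch, assuming z3's Python API and an invented four-operation instance, of one possible encoding of the testing problem for sequential consistency; it is an illustration, not the encoding from the article.

    # Hypothetical sketch: testing under sequential consistency. Each
    # operation gets an integer position in the interleaving; program
    # order is respected, and every read returns the value of the most
    # recent earlier write to its variable.
    from z3 import And, Distinct, Int, Or, Solver

    # (name, process, kind, variable, value)
    ops = [
        ("w1", 0, "W", "x", 1),
        ("r1", 0, "R", "x", 2),   # P0 writes x=1, then reads x == 2
        ("w2", 1, "W", "x", 2),
        ("r2", 1, "R", "x", 1),   # P1 writes x=2, then reads x == 1
    ]

    pos = {name: Int(f"pos_{name}") for name, *_ in ops}
    s = Solver()
    s.add(Distinct(*pos.values()))                       # one total interleaving
    s.add(pos["w1"] < pos["r1"], pos["w2"] < pos["r2"])  # program order

    # A read r takes its value from some matching write w that precedes
    # it, with no other write to the same variable in between.
    writes = [o for o in ops if o[2] == "W"]
    for r, _, kind, var, val in ops:
        if kind != "R":
            continue
        choices = []
        for w, _, _, wvar, wval in writes:
            if wvar == var and wval == val:
                others = [o[0] for o in writes if o[3] == var and o[0] != w]
                no_overwrite = And(*[Or(pos[o] < pos[w], pos[r] < pos[o])
                                     for o in others])
                choices.append(And(pos[w] < pos[r], no_overwrite))
        s.add(Or(*choices))

    # This classic instance has no sequentially consistent interleaving,
    # so the solver reports unsat.
    print(s.check())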