Modern embedded systems integrate increasingly complex functionalities. At the same time, advances in semiconductor technology increase the amount of hardware resources available on a chip for their execution. Massively parallel embedded systems specifically target the optimized usage of such hardware resources to execute these functionalities efficiently. The design of these systems raises the following challenging issues: first, how to exploit parallelism in order to increase performance; second, how to abstract implementation details in order to manage complexity; third, how to refine these abstract representations in order to produce efficient implementations. This article presents the Gaspard design framework for massively parallel embedded systems as a solution to these issues. Gaspard uses the repetitive Model of Computation (MoC), which offers a powerful expression of the regular parallelism available in both system functionality and architecture. Embedded systems are designed at a high abstraction level with the MARTE (Modeling and Analysis of Real-time and Embedded systems) standard profile, in which our repetitive MoC is described by the Repetitive Structure Modeling (RSM) package. Based on the Model-Driven Engineering (MDE) paradigm, MARTE models are refined towards lower abstraction levels, which makes design space exploration possible. By combining these capabilities, Gaspard allows designers to automatically generate code for formal verification, simulation, and hardware synthesis from high-level specifications of high-performance embedded systems. Its effectiveness is demonstrated with the design of an embedded system for a multimedia application.
A benefit of model-driven engineering lies in the automatic generation of artefacts from high-level models, through intermediary levels, using model transformations. In such a process, the input models must be well designed and the model transformations must be trustworthy. Because of the specificities of models and transformations, classical software testing techniques have to be adapted. Among these techniques, mutation analysis has been ported to model transformations, and a set of dedicated mutation operators has been defined. However, mutation analysis currently requires considerable manual work and suffers from the cost of the test data set improvement activity, a difficult and time-consuming job that reduces its benefits. This paper addresses the test data set improvement activity. Model transformation traceability, in conjunction with a model of mutation operators and a dedicated algorithm, makes it possible to automatically or semi-automatically produce improved test models. The approach is validated and illustrated in two case studies written in Kermeta.

Mutation analysis creates faulty versions (mutants) of a program, each simulating a kind of fault that could be introduced by programmers. The efficiency of a given test data set is then measured by its ability to reveal the fault injected in each mutated version (killing these mutants). If the proportion of killed mutants [3] is considered too low, it is necessary to improve the test data set [4]. This activity, which corresponds to the modification of existing test data or the generation of new data, is called test data set improvement. It is usually seen as the most time-consuming step. Experiments show that the test data set initially provided by the tester often already detects 50% to 70% of the mutants as faulty [5]. However, several works state that improving the test set to reveal the errors in 95% of the mutants is difficult in most cases [6,7]. Indeed, each non-killed (i.e. alive) mutant must be analysed in order to understand why no test data reveals its injected fault, and the test data set has to be improved accordingly.

This paper focuses on the test data set improvement step of the mutation analysis process, dedicated to the testing of model transformations. In this context, test data are models. Because of their intrinsic nature, model transformations rely on specific operations (e.g. data collection in a typed graph or collection filtering) that rarely occur in traditional programming. In addition, many different dedicated languages exist to implement model transformations. Thus, the mutation analysis techniques used for traditional programming cannot be directly applied to model transformations; new challenges for model transformation testing are arising [8]. A set of mutation operators dedicated to model transformation has been previously introduced [9]. This paper tackles the problem of test model set improvement by automatically taking mutation operators into account. Tools and heuristics are provided to assist the creation of new test models. The approach proposed in this paper relies on a high-level representation of the mutation operators and a traceability ...
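The mutation-score computation described above (proportion of killed mutants, with a target such as 95%) can be sketched as follows. This is a minimal illustration with hypothetical mutant identifiers and kill records, not the paper's tooling:

```python
# Hypothetical sketch of mutation-score bookkeeping: each mutant id maps to
# True if at least one test datum killed it (revealed its injected fault).
def mutation_score(kill_records):
    """Return the proportion of killed mutants in kill_records."""
    killed = sum(1 for was_killed in kill_records.values() if was_killed)
    return killed / len(kill_records)

# Example run with made-up mutants.
records = {"m1": True, "m2": False, "m3": True, "m4": True}
score = mutation_score(records)            # 3 of 4 mutants killed -> 0.75
needs_improvement = score < 0.95           # 95% threshold from the literature
alive = [m for m, k in records.items() if not k]  # mutants left to analyse
```

Here the test set kills 75% of the mutants, so it falls in the typical initial 50-70%+ range and still needs improvement; the `alive` list identifies the mutants whose non-detection must be analysed.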
As technology scales for increased circuit density and performance, the management of power consumption in systems-on-chip (SoCs) is becoming critical. Today, having appropriate electronic system level (ESL) tools for power estimation in the design flow is mandatory. The main challenge in designing such dedicated tools is achieving a good tradeoff between accuracy and speed. This paper presents a power estimation approach that takes the consumption criterion into account early in the design flow, during system cosimulation. The originality of this approach is that it supports power estimation both for white-box intellectual properties (IPs), using annotated power models, and for black-box IPs, using standalone power estimators. To obtain accurate power estimates, our simulations were performed at the cycle-accurate bit-accurate (CABA) level using SystemC. To make our approach fast and not tedious for users, the simulated architectures, including the standalone power estimators, were generated automatically using a model-driven engineering (MDE) approach. Annotated power models and standalone power estimators can be used together to estimate the consumption of the same architecture, which makes them complementary. The simulation results show that the power estimates given by the two estimation techniques for a hardware component are very close, with a difference that does not exceed 0.3%. This proves that, even when the IP code is not accessible or modifiable, our approach yields quite accurate power estimates early in the design flow, thanks to the automation offered by the MDE approach.
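The annotated power model idea above can be illustrated with a small activity-based sketch: per-event energy costs are attached to an IP, and energy accumulates over the events observed in a cycle-accurate simulation. The event names and energy values below are assumptions for illustration only, not the paper's actual power models:

```python
# Hypothetical activity-based power model: each event type observed during
# a cycle-accurate simulation is annotated with an assumed energy cost (pJ).
ENERGY_PJ = {"idle": 1.0, "read": 4.5, "write": 5.2, "compute": 8.0}

def estimate_energy(activity_trace):
    """Sum the annotated energy cost of every event in the simulated trace."""
    return sum(ENERGY_PJ[event] for event in activity_trace)

# One event per simulated cycle in this simplified trace.
trace = ["compute", "read", "compute", "write", "idle"]
total_pj = estimate_energy(trace)      # total energy over 5 cycles
avg_power = total_pj / len(trace)      # average pJ per cycle
```

A standalone estimator for a black-box IP would play the same role, but would compute its estimate internally from the IP's observable interface activity rather than from annotations inside the IP code.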