2016
DOI: 10.1007/s11241-016-9250-7

Early execution time-estimation through automatically generated timing models

Abstract: Traditional timing analysis, such as worst-case execution time analysis, is normally applied only in the late stages of embedded system software development, when the hardware is available and the code is compiled and linked. However, preliminary timing estimates are often needed in early stages of system development as an essential prerequisite for the configuration of the hardware setup and dimensioning of the system. During this phase the hardware is often not available, and the code might not be ready to l…

Cited by 22 publications (21 citation statements); references 26 publications.
“…In [17], the authors use linear regression for calculating timings, but they use a set of specially crafted training programs to identify the instruction costs of an abstract machine. The authors try to capture the effects of caches, pipelines, and code optimization by crafting examples with longer instruction sequences and loops.…”
Section: Related Work (mentioning)
confidence: 99%
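For readers unfamiliar with the technique named in the quote above, here is a minimal, hypothetical sketch of identifying per-instruction costs by linear regression over training programs. The instruction categories, counts, and measured times are invented for illustration and are not taken from the cited work.

```python
# Minimal sketch (hypothetical data): fit per-instruction costs of an
# abstract machine from measured training-program execution times.
import numpy as np

# Rows = training programs, columns = abstract-machine instruction
# categories (here assumed to be load, store, alu, branch).
instruction_counts = np.array([
    [120,  40, 300,  25],   # training program 1
    [ 80,  60, 150,  40],   # training program 2
    [200, 100, 500,  90],   # training program 3
    [ 50,  20, 100,  10],   # training program 4
    [300, 150, 700, 120],   # training program 5
], dtype=float)

# Measured execution times of the training programs (e.g. in cycles).
measured_times = np.array([5150.0, 3480.0, 10990.0, 1860.0, 16230.0])

# Least-squares fit: find costs minimizing ||counts @ costs - times||^2.
costs, _, _, _ = np.linalg.lstsq(instruction_counts, measured_times, rcond=None)

for name, cost in zip(["load", "store", "alu", "branch"], costs):
    print(f"estimated cost of {name}: {cost:.2f} cycles")
```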
“…Model identification: Another line of work avoids solving the mapping problem for each program individually by deriving a source-level model once, and subsequently obtaining timing annotations from this model only [1]. The timing behavior is measured on a set of training programs, and then used to derive a timing model for source constructs.…”
Section: Related Work (mentioning)
confidence: 99%
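As a loose illustration of the second step described in the quote above (applying an already-derived model to obtain timing annotations), the sketch below combines static construct counts with fitted per-construct costs. The construct names and cost values are hypothetical and not taken from [1] or the cited paper.

```python
# Sketch (hypothetical model): apply a previously derived timing model to a
# new program. `construct_counts` would come from a static count over the
# program's code; `cost_model` holds per-construct costs fitted on training runs.
def estimate_time(construct_counts: dict[str, int],
                  cost_model: dict[str, float]) -> float:
    """Estimate execution time as the cost-weighted sum of construct counts."""
    return sum(count * cost_model.get(construct, 0.0)
               for construct, count in construct_counts.items())

cost_model = {"load": 12.5, "store": 14.0, "alu": 9.8, "branch": 20.3}
construct_counts = {"load": 40, "store": 12, "alu": 95, "branch": 8}
print(f"estimated time: {estimate_time(construct_counts, cost_model):.1f} cycles")
```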
“…Our contributions are as follows: (1) We compare the requirements and goals of timing annotations in VP and WCET, (2) we evaluate the applicability of VP methods in WCET applications, and propose ways to fix them and increase their precision, (3) we propose a generic instruction-to-source mapping algorithm for WCET analysis of simple processors, competitive to classic approaches, and (4) we discuss further synergy in both research communities.…”
Section: Introduction (mentioning)
confidence: 99%
“…WCET analysis has also been proposed at an intermediate level in between source code and machine code, similarly because of the easier analysis of data and control flow. Altenbernd et al. [2] developed an approximation of WCET which works on an ALF representation of the program, without analyzing the executable. They automatically identified a timing model of the intermediate instruction set through execution and measurement of training programs.…”
Section: Related Work (mentioning)
confidence: 99%
“…[Fig. 1: Overview of our WCET tools and their artifacts] …refinement invariants [23] and pattern matching [28]. (2) Existing approaches work at the machine code level, where the high-level information from the original program is hard to extract. Variables are distributed over multiple registers, type information is lost, loops and conditional statements can be implemented in many different ways, and indirect addressing can make it close to impossible to track data flows and function calls.…”
Section: Introduction (mentioning)
confidence: 99%