2009
DOI: 10.1145/1509288.1509293
Memory allocation for embedded systems with a compile-time-unknown scratch-pad size

Abstract: This paper presents the first memory allocation scheme for embedded systems having a scratch-pad memory (SPM) whose size is unknown at compile time. All existing memory allocation schemes for SPM require the SPM size to be known at compile time; they therefore tie the resulting executable to that SPM size, making it non-portable to platforms with different SPM sizes. Because size-portable code is valuable in systems that support downloaded code, our work presents a compiler method whose resulting executable is portabl…

Cited by 19 publications (9 citation statements) · References 26 publications
“…In this topic, a few research works on reconfigurable computing introduce techniques for managing memory and reducing the number of data transfers [22,3,29]. Still, the most active research is on scratch-pad memory [18,16,27] or GPUs [2,6]. These approaches usually support single or shared memory organizations, and make various contributions such as compile-time or operating-system-based allocation and copy policies [18,27,29], new memory allocators [16], or schedule-based optimizations for reducing the cost of data transfers [22,2,6].…”
Section: Related Work
confidence: 99%
“…Still, the most active research is on scratch-pad memory [18,16,27] or GPUs [2,6]. These approaches usually support single or shared memory organizations, and make various contributions such as compile-time or operating-system-based allocation and copy policies [18,27,29], new memory allocators [16], or schedule-based optimizations for reducing the cost of data transfers [22,2,6]. They are complementary to our approach, since it supports run-time decisions and targets a distributed memory organization where the datapath tiles can only access their local memories.…”
Section: Related Work
confidence: 99%
“…At the boundary between static and dynamic allocation, [6] performs a load-time optimization that places the stack data in one of the memories using information computed at compile time, while taking the size of the memory at runtime into account. Compared to this approach, we manage both heap and stack data.…”
Section: Background and Related Work
confidence: 99%
“…During execution, the different regions were scheduled to different cores. In further work [19], the lengths of the divided sub-loops were limited to balance the load across cores and improve whole-system performance. The work in [20] proposed a complete-share approach.…”
Section: Introduction
confidence: 99%