2011 International Conference on Computational Science and Its Applications
DOI: 10.1109/iccsa.2011.46
A Fast Lock-Free User Memory Space Allocator for Embedded Systems

Abstract: Many embedded systems have gained hardware improvements such as large memories and multiple cores. Following these improvements, applications that demand very high operation rates per second have appeared. These applications often use dynamic memory allocation, but existing allocators do not scale well, so the performance of these applications is limited by their allocators. Moreover, because applications that run on embedded systems are rarely powered off, the external fragmentation problem is …

Cited by 4 publications (2 citation statements)
References 10 publications
“…This solution is in fact based on pre-reserving memory to be delivered to specific threads (or CPU cores), and resorts to lock-based coordination across the threads whenever the pre-reserved memory is fully used by a thread and the global state of the memory allocator needs to be changed in order to provide a new pre-reserved area. Similar approaches, where threads operate on pre-partitioned heaps, hence on different memory-allocator instances, have been presented in [15], [16], [17]. Similarly to the previous work, this proposal still does not address the problem of avoiding blocking allocations/releases in scenarios where the same allocator instance can be concurrently accessed by multiple threads.…”
Section: Related Work
confidence: 99%
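The scheme described in the citation statement, a lock-free per-thread fast path backed by a lock-based global slow path, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the names (`arena_alloc`, `arena_refill`), the arena size, and the use of `malloc` as a stand-in for the global pre-reserved pool are all assumptions made for the example.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch: each thread bump-allocates from a pre-reserved
 * arena without any locking, and takes a lock only on the slow path,
 * when its arena is exhausted and the allocator's global state must be
 * updated to hand out a new pre-reserved region. */

#define ARENA_SIZE (64 * 1024)

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

typedef struct {
    char  *base;   /* start of this thread's pre-reserved region */
    size_t used;   /* bytes handed out from it so far */
} arena_t;

static _Thread_local arena_t arena = { NULL, 0 };

/* Slow path: lock-based coordination on the global state. Here malloc
 * stands in for carving a region out of a real pre-reserved pool. */
static void arena_refill(void) {
    pthread_mutex_lock(&global_lock);
    arena.base = malloc(ARENA_SIZE);
    arena.used = 0;
    pthread_mutex_unlock(&global_lock);
}

/* Fast path: no locks, just a per-thread bump pointer.
 * (A sketch only: requests larger than ARENA_SIZE are not handled.) */
void *arena_alloc(size_t size) {
    size = (size + 15) & ~(size_t)15;   /* round up to 16-byte alignment */
    if (arena.base == NULL || arena.used + size > ARENA_SIZE)
        arena_refill();
    void *p = arena.base + arena.used;
    arena.used += size;
    return p;
}
```

Because each thread owns its arena outright, the common allocation case involves no shared-memory synchronization at all, which is exactly why such designs scale better on multi-core embedded hardware than a single lock-protected heap.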