Proceedings of the Tenth International Symposium on Hardware/Software Codesign (CODES '02), 2002
DOI: 10.1145/774789.774805

Scratchpad memory

Cited by 459 publications (35 citation statements)
References 6 publications

“…Consider again the example of Figure 2(b), where a[0] and a[2] are received in order but a[1] is delayed.…”
Section: Push vs. Pull Addressing (mentioning)
confidence: 99%
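
The statement above contrasts push and pull addressing for data that arrives out of order. As a rough illustration only (the cited work's actual mechanism may differ), the following C sketch shows a push-style receiver in which every arriving element carries its destination index, so a[0] and a[2] can be stored immediately while the late a[1] simply fills its slot when it arrives. The receive_element() stub and its fixed arrival order are hypothetical stand-ins, not an API from the cited paper.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define N 3

/* Hypothetical arriving element: a value tagged with its destination index,
 * so it can be "pushed" into the right slot even when it arrives late. */
struct element {
    size_t index;
    int    value;
};

/* Stub producer: delivers a[0] and a[2] first, but a[1] last, mimicking the
 * delayed-element scenario described in the citation statement. */
static struct element receive_element(void)
{
    static const struct element arrival_order[N] = {
        { 0, 10 }, { 2, 30 }, { 1, 20 }
    };
    static size_t next = 0;
    return arrival_order[next++];
}

int main(void)
{
    int a[N];
    bool present[N] = { false };
    size_t remaining = N;

    /* Push addressing: each element carries its index, so it is stored
     * directly at its final location regardless of arrival order. */
    while (remaining > 0) {
        struct element e = receive_element();
        if (!present[e.index]) {
            a[e.index] = e.value;
            present[e.index] = true;
            remaining--;
        }
    }

    for (size_t i = 0; i < N; i++)
        printf("a[%zu] = %d\n", i, a[i]);
    return 0;
}
```

A pull-style consumer would instead issue the address of a[1] and wait for the response; the sketch deliberately shows only the push side.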
“…While these different accelerator propositions can achieve very significant energy and/or performance gains, the corresponding studies and designs focus on the computational aspects of the accelerator, less so on its interface with the cache or memory system. However, there are several reasons why the accelerator memory interface should receive greater attention: (1) unlike in most GPGPU applications, ASICs and CGRAs can be used to map applications with complex control flow, but the resulting memory access patterns can be very irregular, so such accelerators cannot be plugged into traditional scratchpads; they must be plugged into caches, just like processors (a typical system organization would be processors and accelerators each plugged into private L1s, with shared L2s); (2) as accelerators reduce the energy spent in computations, the fraction of energy spent accessing memory will comparatively increase, a kind of Amdahl's law effect on energy; and (3) one of the key assets of accelerators is their reduced area, so one should take care that this area advantage is not outweighed by an over-sized memory interface.…”
Section: Introduction (mentioning)
confidence: 99%
“…Although cached systems are already a standard for desktop machines, the hardware complexity required to keep the data history and maintain coherency is extremely high, and thus often unsuitable for many embedded applications. A more effective way of handling data transfers in embedded systems is to couple scratchpad memories with direct memory access (DMA) controllers: it has been demonstrated that scratchpad memories are more energy efficient than caches for embedded applications [1]. However, programming them requires the explicit configuration and triggering of memory transfers.…”
Section: Introduction (mentioning)
confidence: 99%
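
To make the "explicit configuration and triggering" point concrete, below is a minimal double-buffering sketch in C: while the core processes one scratchpad tile, a DMA transfer for the next tile is explicitly configured and started. The dma_configure()/dma_start()/dma_wait() hooks, the scratchpad placement of spm_buf, and the memcpy-based stand-ins are hypothetical illustrations, not an API from the cited works; a real platform would use its own DMA driver or memory-mapped registers.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

#define TILE_WORDS 256

/* Hypothetical DMA hooks. On a real platform these would program
 * memory-mapped DMA registers; here they are host-side stand-ins so the
 * sketch compiles and runs (memcpy stands in for the asynchronous copy). */
static struct { void *dst; const void *src; size_t bytes; } dma_job;
static void dma_configure(void *dst, const void *src, size_t bytes)
{
    dma_job.dst = dst; dma_job.src = src; dma_job.bytes = bytes;
}
static void dma_start(void) { memcpy(dma_job.dst, dma_job.src, dma_job.bytes); }
static void dma_wait(void)  { /* real code would poll a DMA status register */ }

/* Two tile buffers, assumed placed in the scratchpad by a linker script. */
static int32_t spm_buf[2][TILE_WORDS];

/* Process one tile that is already resident in the scratchpad. */
static int64_t process_tile(const int32_t *tile)
{
    int64_t sum = 0;
    for (size_t i = 0; i < TILE_WORDS; i++)
        sum += tile[i];
    return sum;
}

/* Double buffering: the core works on one scratchpad tile while the DMA
 * engine is explicitly configured and triggered to fetch the next one. */
static int64_t sum_array(const int32_t *dram_data, size_t n_tiles)
{
    int64_t total = 0;

    dma_configure(spm_buf[0], dram_data, sizeof spm_buf[0]);
    dma_start();
    dma_wait();                              /* first tile must be resident */

    for (size_t t = 0; t < n_tiles; t++) {
        size_t cur = t & 1;
        if (t + 1 < n_tiles) {               /* prefetch the next tile */
            dma_configure(spm_buf[cur ^ 1],
                          dram_data + (t + 1) * TILE_WORDS,
                          sizeof spm_buf[0]);
            dma_start();
        }
        total += process_tile(spm_buf[cur]); /* compute on the current tile */
        if (t + 1 < n_tiles)
            dma_wait();                      /* ensure prefetch completed */
    }
    return total;
}

int main(void)
{
    static int32_t dram_data[4 * TILE_WORDS];
    for (size_t i = 0; i < 4 * TILE_WORDS; i++)
        dram_data[i] = (int32_t)i;
    printf("sum = %lld\n", (long long)sum_array(dram_data, 4));
    return 0;
}
```

The explicit dma_wait() before reusing a buffer is exactly the kind of manual synchronization the quoted statement refers to: the programmer, rather than a cache controller, guarantees that the data is resident in the scratchpad before it is used.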