Proceedings of the 1995 International Symposium on Low Power Design (ISLPED '95)
DOI: 10.1145/224081.224093
Cache design trade-offs for power and performance optimization


Cited by 249 publications (153 citation statements). References 17 publications.
“…Such overhead is lower for the large L2 cache because its data arrays are larger, and thus, the relative cost of ROM cells is lower. If wordline partitioning [10,23] were used, the area cost would rise, but in any reasonable configuration the area overhead would remain well below 5%. Nevertheless, the area cost can be reduced by shrinking the signatures at the expense of some aliasing, as explained before.…”
Section: Results
confidence: 99%
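The trade-off in the statement above — smaller signatures save area but admit some aliasing — can be sketched numerically. This is a minimal illustration with made-up parameters (the 64-byte line and 16-bit signature are assumptions, not values from the cited paper):

```python
# Hypothetical sketch: relative area overhead of a per-line signature,
# and the aliasing probability introduced by shrinking that signature.
# All parameters are illustrative, not taken from the cited work.

def area_overhead(line_bytes: int, sig_bits: int) -> float:
    """Fraction of extra storage added by a sig_bits-wide signature per line."""
    line_bits = line_bytes * 8
    return sig_bits / (line_bits + sig_bits)

def aliasing_prob(sig_bits: int) -> float:
    """Probability that two distinct lines map to the same random signature."""
    return 2.0 ** -sig_bits

if __name__ == "__main__":
    # A 64-byte line with a 16-bit signature stays well under the 5% bound
    # mentioned above, at the cost of a ~2^-16 aliasing probability.
    print(f"overhead: {area_overhead(64, 16):.3%}")  # ≈ 3.0%
    print(f"aliasing: {aliasing_prob(16):.2e}")      # ≈ 1.5e-05
```

Halving the signature width halves the area overhead but doubles the exponent on the aliasing probability, which is the trade-off the excerpt describes.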
“…Thus, an L-cache can hold only a pre-specified and limited number of loops. Line buffers are essentially degenerate L0/filter caches that contain only a single line [19]. The cache access latency of a line buffer is typically prolonged such that a line buffer miss will trigger the corresponding fetch from the L1-IC during the same processor clock cycle.…”
Section: Related Work
confidence: 99%
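The single-line buffer described above can be sketched as a tiny filter in front of the L1 instruction cache: a hit is served from the buffer (avoiding an L1 access), and a miss falls through to the L1-IC and refills the buffer. This is a behavioral sketch only; the 32-byte line size and the fetch callback are assumptions for illustration:

```python
# Minimal sketch of a line buffer (a degenerate single-line L0/filter cache)
# in front of the L1 instruction cache. LINE_SIZE and fetch_from_l1 are
# illustrative assumptions, not details from the cited papers.

LINE_SIZE = 32  # bytes per line (assumed)

class LineBuffer:
    def __init__(self, fetch_from_l1):
        self.tag = None              # base address of the buffered line
        self.data = None
        self.fetch_from_l1 = fetch_from_l1
        self.hits = 0
        self.misses = 0

    def read(self, addr: int) -> int:
        base = addr - (addr % LINE_SIZE)
        if self.tag == base:         # hit: serve from the buffer,
            self.hits += 1           # saving an L1-IC access
        else:                        # miss: fetch the line from the L1-IC
            self.misses += 1         # (same cycle, per the excerpt) and refill
            self.data = self.fetch_from_l1(base)
            self.tag = base
        return self.data[addr - base]

if __name__ == "__main__":
    l1 = lambda base: bytes(range(LINE_SIZE))  # stand-in for the L1-IC
    buf = LineBuffer(l1)
    for pc in range(0, 64, 4):                 # sequential 4-byte fetch stream
        buf.read(pc)
    print(buf.hits, buf.misses)                # → 14 2
```

Sequential instruction streams hit the buffer on almost every fetch, which is why such a single-line filter can cut L1-IC activity substantially.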
“…Research has also been done on trade-offs among the cache parameters. [3] gives a case study of cache design trade-offs for power and performance optimization. Since [5], the focus of research has shifted to run-time reconfiguration methods, with on-demand performance as the keyword.…”
Section: Recent Advances
confidence: 99%
“…Current research is focused on energy-efficient cache architectures ([1], [3], [5], [9]) and new reconfigurable caching techniques ([2], [4], [8]). Since circuit-level techniques cannot single-handedly achieve these ends, higher levels of abstraction, namely the algorithmic and architectural levels [7], are attracting increasing interest.…”
Section: Introduction
confidence: 99%