ICCAD-2005. IEEE/ACM International Conference on Computer-Aided Design, 2005.
DOI: 10.1109/iccad.2005.1560207

A cache-defect-aware code placement algorithm for improving the performance of processors

Abstract: Yield improvement through exploiting fault-free sections of defective chips is a well-known technique [1][2]. The idea is to partition the circuitry of a chip in a way that fault-free sections can function independently. Many fault-tolerant techniques for improving the yield of processors with a cache memory have been proposed [3][4][5]. In this paper, we propose a defect-aware code placement technique which offsets the performance degradation of a processor with a defective cache memory. To the best of our kno…
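The abstract stops short of the algorithm itself (the text is truncated), but the core idea can be illustrated: lay out code so that frequently executed blocks never map to cache sets marked defective for that particular chip. The sketch below is only an illustration under assumed parameters (cache geometry, fault map, greedy strategy); it is not the authors' actual placement algorithm.

```python
# Hypothetical sketch of defect-aware code placement: lay out code blocks,
# hottest first, so that none of their cache lines map to faulty sets of a
# direct-mapped instruction cache. Illustrative only; all parameters assumed.

LINE_SIZE = 32               # cache line size in bytes (assumed)
NUM_SETS = 256               # number of cache sets (assumed)
FAULTY_SETS = {3, 17, 200}   # hypothetical per-chip defect map

def maps_to_faulty_set(addr, size):
    """True if any line touched by [addr, addr + size) maps to a faulty set."""
    return any(((addr + off) // LINE_SIZE) % NUM_SETS in FAULTY_SETS
               for off in range(0, size, LINE_SIZE))

def place_blocks(blocks):
    """blocks: list of (name, size_bytes, exec_frequency).
    Greedily lays out blocks, hottest first, sliding each block past
    addresses whose cache sets are marked faulty. Returns name -> address."""
    placement, addr = {}, 0
    for name, size, _freq in sorted(blocks, key=lambda b: -b[2]):
        while maps_to_faulty_set(addr, size):
            addr += LINE_SIZE            # slide the block past the faulty set
        placement[name] = addr
        # Keep blocks line-aligned so the set computation above stays exact.
        addr = ((addr + size + LINE_SIZE - 1) // LINE_SIZE) * LINE_SIZE
    return placement

if __name__ == "__main__":
    demo = [("hot_loop", 96, 10_000), ("init", 64, 1), ("helper", 128, 500)]
    print(place_blocks(demo))
```

Such a placement necessarily depends on the chip's individual defect map, which is why one of the citing statements below notes that the approach demands a per-chip binary executable.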

Cited by 7 publications (11 citation statements) | References 15 publications

“…Shirvani et al [21] describe a programmable address decoder that redirects accesses to slow/faulty cache lines to other lines in the same cache-set. Another simpler technique to bypass slow cache lines [10] uses an unused combination of flag bits to mark the cache-line as faulty and to eliminate it from normal operation of the cache. These are not applicable to general SRAM memories other than caches.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
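
The flag-bit idea quoted above amounts to excluding marked lines from lookup and allocation. Below is a minimal sketch of such block disabling in a set-associative cache model; the class layout, flag field, and LRU policy are assumptions for illustration, not the exact mechanisms of [10] or [21].

```python
# Minimal sketch of block disabling in a set-associative cache model: a
# per-line "faulty" flag (standing in for an unused flag-bit encoding) keeps
# the line out of lookup and allocation. Not the exact scheme of [10] or [21].

class CacheLine:
    def __init__(self):
        self.valid = False
        self.tag = None
        self.faulty = False      # set once from the chip's defect map

class CacheSet:
    def __init__(self, ways):
        self.lines = [CacheLine() for _ in range(ways)]
        self.lru = list(range(ways))   # front = least recently used way

    def _touch(self, way):
        self.lru.remove(way)
        self.lru.append(way)

    def lookup(self, tag):
        for way, line in enumerate(self.lines):
            if line.valid and not line.faulty and line.tag == tag:
                self._touch(way)
                return True            # hit
        return False                   # miss

    def fill(self, tag):
        # Allocate only among non-faulty ways; a fully faulty set never caches.
        candidates = [w for w in self.lru if not self.lines[w].faulty]
        if not candidates:
            return
        victim = candidates[0]         # least recently used non-faulty way
        self.lines[victim].valid = True
        self.lines[victim].tag = tag
        self._touch(victim)

if __name__ == "__main__":
    s = CacheSet(ways=4)
    s.lines[0].faulty = True           # disable one way from the defect map
    for tag in (10, 11, 12, 13):       # only 3 usable ways remain
        if not s.lookup(tag):
            s.fill(tag)
    print([(line.tag, line.faulty) for line in s.lines])
```

Allocating only among non-faulty ways is precisely what shrinks the usable associativity, which is the capacity cost the next citing statement points out.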
“…In cache memories, several previous works address improving timing-yield in presence of process variation by proposing process-tolerant cache architectures [1], [21] and code-placement compiler techniques [10], but they actually reduce the useful capacity of the cache by marking and avoiding to use too-slow cache lines. Although [10] provides a solution to mitigate the performance impact, it demands a per-chip different binary executable. Other highly-cited works exist to reduce cache static power [2][12][6], but they do not consider process-variation.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
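
To make the quoted capacity objection concrete: under a simple independent per-cell fault model (an assumption for illustration, not data from the cited works), a line is disabled as soon as any of its cells is faulty, so the expected fraction of usable lines is (1 − p)^bits.

```python
# Expected capacity loss under block disabling with an assumed independent
# per-cell fault model; probabilities and line geometry are illustrative.

def usable_line_fraction(p_cell_fault, bits_per_line):
    """A line is disabled if any of its cells is faulty, so it stays usable
    with probability (1 - p) ** bits_per_line."""
    return (1.0 - p_cell_fault) ** bits_per_line

if __name__ == "__main__":
    bits = 32 * 8 + 20       # assumed: 32-byte data array plus ~20 tag/flag bits
    for p in (1e-6, 1e-5, 1e-4):
        print(f"p_cell={p:.0e}: ~{usable_line_fraction(p, bits):.2%} of lines usable")
```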
“…However, the study still relies on the execution of random maps which are generated by means of Monte Carlo method. Finally, there are several other studies [1], [21], [20], [10], [11] looking into the impact of faults over caches using random maps.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
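
The random maps referred to above are typically drawn by Monte Carlo sampling of independent cell failures. A minimal sketch with assumed cache geometry and fault probability:

```python
# Monte Carlo generation of random cache fault maps: each SRAM cell fails
# independently with probability p_fault. Geometry and p_fault are assumed.
import random

def random_fault_map(num_lines, bits_per_line, p_fault, seed=None):
    """Returns {line_index: set_of_faulty_bit_positions} for faulty lines only."""
    rng = random.Random(seed)
    fault_map = {}
    for line in range(num_lines):
        faulty_bits = {b for b in range(bits_per_line) if rng.random() < p_fault}
        if faulty_bits:
            fault_map[line] = faulty_bits
    return fault_map

if __name__ == "__main__":
    maps = [random_fault_map(1024, 276, 1e-4, seed=s) for s in range(20)]
    avg = sum(len(m) for m in maps) / len(maps)
    print(f"average faulty lines per map: {avg:.1f} of 1024")
```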
“…Previous block disabling-based studies (such as [22], [19], [21], [13], [12], [10], [20], [11]) rely on the use of an arbitrary number (small or large) of random fault-maps. Each random fault-map indicates faulty cache cell locations and determines the disabled faulty cache blocks.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
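
Following the quoted description, a cell-level fault map is then reduced to the set of disabled blocks (under block disabling, a block is disabled if any of its cells is faulty). A small sketch under the same illustrative assumptions as above:

```python
# From a cell-level fault map to the set of disabled cache blocks under a
# block-disabling policy. Parameters and the example map are assumptions.

def disabled_blocks(fault_map):
    """fault_map: {line_index: set_of_faulty_bit_positions}."""
    return sorted(line for line, bits in fault_map.items() if bits)

def usable_capacity_bytes(num_lines, line_bytes, fault_map):
    return (num_lines - len(disabled_blocks(fault_map))) * line_bytes

if __name__ == "__main__":
    fmap = {3: {17}, 200: {5, 9}}   # hypothetical map (or use random_fault_map above)
    print(disabled_blocks(fmap))                        # -> [3, 200]
    print(usable_capacity_bytes(1024, 32, fmap), "bytes usable of", 1024 * 32)
```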