Proceedings of the 2012 International Conference on Compilers, Architectures and Synthesis for Embedded Systems
DOI: 10.1145/2380403.2380435

Revisiting level-0 caches in embedded processors

Abstract: Level-0 (L0) caches have been proposed in the past as an inexpensive way to improve performance and reduce energy consumption in resource-constrained embedded processors. This paper proposes new L0 data cache organizations using the assumption that an L0 hit/miss determination can be completed prior to the L1 access. This is a realistic assumption for very small L0 caches that can nevertheless deliver significant miss rate and/or energy reduction. The key issue for such caches is how and when to move data betw…

Cited by 15 publications (17 citation statements). References 28 publications.
“…2. The main architecture for our WF cache is similar to the ones previously proposed [10,11]. These techniques are mainly for performance improvement and energy reduction by exploiting the fast hit latency and low access energy of the filter cache.…”
Section: Architecture
confidence: 99%
“…In this paper, we propose two different line allocation policies for the WF cache: WF_RD and WF_WR. The WF_RD policy is similar to the allocation policy of Hit Cache [10]. The WF_RD policy does not allocate the cache line to the WF cache along with the block (line) fill in the L1 data cache (i.e., when there is a cache miss).…”
Section: WF Cache Line Allocation Policy
confidence: 99%
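The allocate-on-hit behavior described in this excerpt can be sketched as a small simulator. This is an illustrative model only: the `SimpleCache` class, the cache sizes, and the `access_wf_rd` function are assumptions made for exposition, not the cited paper's implementation, and real filter caches track tags and data at line granularity in hardware.

```python
from collections import OrderedDict


class SimpleCache:
    """Fully associative LRU cache holding a fixed number of line addresses
    (illustrative only; no data payloads are modeled)."""

    def __init__(self, lines):
        self.lines = lines
        self.data = OrderedDict()  # line address -> present

    def contains(self, addr):
        if addr in self.data:
            self.data.move_to_end(addr)  # refresh LRU position
            return True
        return False

    def insert(self, addr):
        if addr in self.data:
            self.data.move_to_end(addr)
            return
        if len(self.data) >= self.lines:
            self.data.popitem(last=False)  # evict the LRU line
        self.data[addr] = True


def access_wf_rd(wf, l1, addr):
    """One read access under a WF_RD-style policy (hypothetical sketch):
    a WF hit is serviced by the small cache; a WF miss that hits in L1
    promotes the line into the WF cache; an L1 miss fills L1 only --
    the WF cache is NOT allocated on a miss."""
    if wf.contains(addr):
        return "wf_hit"
    if l1.contains(addr):
        wf.insert(addr)  # allocate in the WF cache only on an L1 hit
        return "l1_hit"
    l1.insert(addr)  # miss fill bypasses the WF cache
    return "miss"
```

Repeated accesses to the same line then progress from a miss, to an L1 hit that promotes the line, to a WF hit serviced at the small cache's lower energy per access.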
“…A majority of references are thus serviced by the larger L1D cache, and consequently, the relatively shorter latency and energy per access of the assist cache remain unexploited. Duong's L0 scheme (I1P101 in particular) [2] and the HitME cache [5] are examples of cache organizations that service a majority of data references through an assist placed alongside the L1D. In these proposals, any cache line that is referenced at least once in the L1D is immediately moved to the assist.…”
Section: Related Work
confidence: 99%
“…In this structure, a predictor is added to the filter cache system and the predictor selects the access path between L1 cache and filter cache. In addition, new policies about cache placement have been introduced to increase hit rate of L1 cache or filter cache [5], [6]. According to the policies, a cache line is inserted into either L1 cache or filter cache on a cache miss.…”
Section: Introduction
confidence: 99%
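One way to realize the predictor mentioned in this excerpt is a table of saturating counters. The sketch below is a hypothetical per-address 2-bit predictor for choosing which cache to probe first; the excerpt does not detail the cited paper's actual predictor, so the class name, indexing, and counter widths are assumptions.

```python
class PathPredictor:
    """Hypothetical 2-bit saturating-counter predictor that guesses whether
    an access will hit the filter cache (real designs may instead index the
    table by PC or by selected address bits)."""

    def __init__(self):
        self.table = {}  # line address -> counter in [0, 3]

    def predict_filter(self, addr):
        # Counters start at 2: weakly predict a filter-cache hit.
        return self.table.get(addr, 2) >= 2

    def update(self, addr, hit_in_filter):
        # Saturating increment on a filter-cache hit, decrement otherwise.
        c = self.table.get(addr, 2)
        self.table[addr] = min(3, c + 1) if hit_in_filter else max(0, c - 1)
```

On each access the predicted cache is probed first and the counter is trained with the actual outcome, so lines that repeatedly miss in the filter cache steer future accesses straight to L1, avoiding the serialized double probe.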