A 32-bit CMOS microprocessor with on-chip cache and TLB (1987)
DOI: 10.1109/jssc.1987.1052816

Cited by 14 publications (5 citation statements)
References 4 publications
“…In order to show the feasibility of the new CAM cell for use in a system, transient performance of the new CAM cell used in a translation lookaside buffer (TLB) [7] has been done. Fig.…”
Section: Performance and Discussion
confidence: 99%
“…Fig. 5 shows the brief critical path during the read access of TLB [7] using the new CAM cell with PD SOI DTMOS techniques. As shown in the figure, the ML is ANDed with a clock signal CLK2 to generate the word line for driving the corresponding SRAM cell-CLK2 serves to detect the miss/hit signal.…”
Section: Performance and Discussion
confidence: 99%
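The read path described in that statement (each CAM match line gated by CLK2 to form the SRAM word line, with the gated match lines also giving the hit/miss indication) can be modeled behaviorally. The sketch below is a minimal Python illustration; the ToyTLB class, entry count, and field layout are assumptions for exposition, not details of the cited design.

```python
# Minimal behavioral model of the TLB read path described above:
# each CAM entry produces a match line (ML); ML is ANDed with a clock
# phase (CLK2) to form the word line driving the SRAM data array, and
# the OR of the gated match lines serves as the hit/miss signal.
# Entry count and field names are illustrative assumptions.

class ToyTLB:
    def __init__(self, entries=8):
        self.tags = [None] * entries      # CAM array: virtual page numbers
        self.data = [None] * entries      # SRAM array: physical page numbers

    def fill(self, index, vpn, ppn):
        self.tags[index] = vpn
        self.data[index] = ppn

    def read(self, vpn, clk2=True):
        # Match lines from comparing every CAM entry against the VPN.
        match_lines = [tag == vpn for tag in self.tags]
        # Word lines: each match line gated (ANDed) with CLK2.
        word_lines = [ml and clk2 for ml in match_lines]
        hit = any(word_lines)             # gated MLs also give hit/miss
        ppn = self.data[word_lines.index(True)] if hit else None
        return hit, ppn


tlb = ToyTLB()
tlb.fill(0, vpn=0x12345, ppn=0x0A0)
print(tlb.read(0x12345))   # hit  -> (True, 160), i.e. PPN 0x0A0
print(tlb.read(0x54321))   # miss -> (False, None)
```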
“…To solve this issue, we chose to implement the counter LRU algorithm [Kadota et al 1987] and program Chameleon microcode to perform replacement operations. This software-based LRU replacement mechanism will read the LRU state of the hit line and broadcast the outcome back to all PEs in the same column.…”
Section: C-mode: Virtualizing Idle Cores for Caching
confidence: 99%
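The counter LRU scheme referred to there [Kadota et al 1987] can be sketched roughly as follows. This is a generic counter-based (aging) LRU policy for illustration only; the CounterLRUSet class, set size, and update rules are assumptions, not the microcode or hardware of the cited implementations.

```python
# Generic counter-based LRU for one set of a set-associative cache:
# each way carries a small counter, the hit line's counter is reset to 0,
# lines more recently used than the hit line age by one, and on a miss
# the line with the largest counter (least recently used) is the victim.

class CounterLRUSet:
    def __init__(self, ways=4):
        self.ways = ways
        self.tags = [None] * ways
        self.counters = list(range(ways))       # 0 = most recently used

    def access(self, tag):
        """Return (way, hit) for a reference to `tag`, updating LRU state."""
        if tag in self.tags:
            way = self.tags.index(tag)
            old = self.counters[way]
            for w in range(self.ways):          # age lines more recent than the hit line
                if self.counters[w] < old:
                    self.counters[w] += 1
            self.counters[way] = 0              # hit line becomes most recently used
            return way, True
        victim = self.counters.index(self.ways - 1)   # largest counter = LRU line
        for w in range(self.ways):
            self.counters[w] += 1               # every line ages by one
        self.tags[victim] = tag
        self.counters[victim] = 0               # freshly filled line is MRU
        return victim, False


s = CounterLRUSet(ways=4)
for t in ["A", "B", "A", "C", "D", "E"]:
    print(t, s.access(t))   # "E" evicts the least recently used line (here "B")
```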
“…Commercial and academic microprocessor architectures are increasingly incorporating caches on the processor chip itself to avoid off-chip latencies [3], [6], [8], [12]. These on-chip caches are currently small, but the trend is toward larger sizes to hide relatively slower off-chip memory speeds; thus, these chips devote an increasing portion of their area to the memory (tags and blocks) of the cache.…”
Section: Introduction
confidence: 99%
“…For each cache, Sohi reports the average miss ratio of several simulations with different fault patterns. He presents results for the number of faulty blocks ranging from 0% to 50% of the blocks in caches of three sizes (256, 1K, and 8K bytes), three associativities (direct-mapped, two-way set-associative, and fully associative) and three block sizes (8, 16, and 32 bytes).…”
Section: Introduction
confidence: 99%
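A rough sense of the kind of experiment summarized in that statement can be had from a small simulation in which a fraction of a direct-mapped cache's blocks are marked faulty and can never hit. The miss_ratio function, trace, and fault model below are illustrative assumptions, not Sohi's actual methodology or results.

```python
import random

def miss_ratio(trace, cache_bytes=1024, block_bytes=16,
               faulty_fraction=0.25, seed=0):
    """Miss ratio of a direct-mapped cache with some blocks disabled as faulty.

    Accesses that map to a faulty block always miss and are never filled,
    mimicking the idea of averaging miss ratio over a given fault pattern.
    Cache geometry, fault model, and trace format are illustrative assumptions.
    """
    n_blocks = cache_bytes // block_bytes
    rng = random.Random(seed)
    faulty = set(rng.sample(range(n_blocks), int(faulty_fraction * n_blocks)))
    tags = [None] * n_blocks
    misses = 0
    for addr in trace:
        block_addr = addr // block_bytes
        index = block_addr % n_blocks
        tag = block_addr // n_blocks
        if index in faulty or tags[index] != tag:
            misses += 1
            if index not in faulty:
                tags[index] = tag            # only healthy blocks can be filled
    return misses / len(trace)

# Toy trace: repeated sweeps over a working set larger than the cache.
trace = [i * 4 for i in range(512)] * 4
print(miss_ratio(trace, faulty_fraction=0.0))
print(miss_ratio(trace, faulty_fraction=0.5))   # more faulty blocks -> higher miss ratio
```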