2019
DOI: 10.1002/ett.3747
An expandable topology with low wiring congestion for silicon interposer‐based network‐on‐chip systems

Abstract: In 2.5D stacking technology, multiple chips are stacked side by side on a silicon interposer layer. The network-on-chip in the central processing unit (CPU) layer connects the processing cores to each other. The interposer layer provides the connection between the CPU cores and other chips, such as memory chips. The memory chip usually contains several segments stacked vertically. The network-on-chip can be extended to the interposer layer to make use of unused routing resources of the interpose…

Cited by 1 publication (3 citation statements)
References 25 publications
“…Placing on the two horizontal sides, more memory stacks are integrated through the silicon interposer area. For this reason, higher capacities and higher bandwidth are achievable compared to 3D technology [5][6][7]. In this figure, the differences between 2.5D and 3D technologies have been shown.…”
Section: Introduction
confidence: 92%
“…In Figure 1(a), it may be seen that memory blocks are located on the silicon interposer and connected with the interposer layer through interface nodes. There are also two disparate types of traffic: the first one is the core-to-core traffic associated with the interaction between cores and the second is the core-to-memory traffic, which is responsible for transferring packets between memory blocks and cores [1,6]. Core-to-core traffic, which is known for coherence traffic, should be distributed across all the networks in order to avoid protocol-level deadlock [1].…”
Section: Introduction
confidence: 99%