Design, Automation & Test in Europe Conference & Exhibition (DATE), 2014
DOI: 10.7873/date.2014.252

Parallel probe based dynamic connection setup in TDM NoCs

Abstract: We propose a Time-Division Multiplexing (TDM) based connection oriented NoC with a novel double time-wheel router architecture combined with a run-time parallel probing setup method. In comparison with traditional TDM connection setup methods, our design has the following advantages: (1) it allocates paths and time slots at run-time; (2) it is fast with predictable and bounded setup latency; (3) it avoids additional resources (no auxiliary network or central processor to find and manage connections); (4) it is…
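
The abstract describes run-time, probe-driven allocation of paths and TDM slots. As a rough illustration of that idea only (not the paper's double time-wheel router), the Python sketch below probes two candidate mesh paths for a consistent set of free slots and reserves the first feasible combination; the link/slot-table model, the choice of XY and YX candidate paths, and the one-slot-per-hop pipelining are assumptions made for this example.

```python
# Illustrative sketch only: a toy model of probing candidate mesh paths for
# free TDM slots and reserving the first feasible option. The mesh/link model,
# the two candidate paths (XY and YX routing) and the one-slot-per-hop
# pipelining are assumptions for this example, not the paper's router design.
from collections import defaultdict

SLOTS = 8  # length of the TDM wheel (assumed)

class Link:
    """One unidirectional link with a TDM slot table."""
    def __init__(self):
        self.slots = [None] * SLOTS          # None = free, else connection id

    def free_at(self, t):
        return self.slots[t % SLOTS] is None

    def reserve(self, t, conn):
        self.slots[t % SLOTS] = conn

def xy_path(src, dst):
    """Dimension-ordered XY route on a mesh, as a list of (from, to) hops."""
    (x, y), hops = src, []
    while x != dst[0]:
        nx = x + (1 if dst[0] > x else -1)
        hops.append(((x, y), (nx, y)))
        x = nx
    while y != dst[1]:
        ny = y + (1 if dst[1] > y else -1)
        hops.append(((x, y), (x, ny)))
        y = ny
    return hops

def yx_path(src, dst):
    """YX route: route the Y dimension first by swapping coordinates."""
    return [(a[::-1], b[::-1]) for a, b in xy_path(src[::-1], dst[::-1])]

def probe(path, links, start_slot):
    """A probe succeeds if hop i of the path is free in slot start_slot + i,
    i.e. a flit can advance one hop per TDM slot without waiting."""
    return all(links[hop].free_at(start_slot + i) for i, hop in enumerate(path))

def setup(src, dst, conn, links):
    """Probe both candidate paths (conceptually in parallel; tried in turn in
    this sequential model) and reserve slots along the first feasible one."""
    for path in (xy_path(src, dst), yx_path(src, dst)):
        for s in range(SLOTS):
            if probe(path, links, s):
                for i, hop in enumerate(path):
                    links[hop].reserve(s + i, conn)
                return path, s
    return None  # no path/slot combination found: caller retries or gives up

if __name__ == "__main__":
    links = defaultdict(Link)                # links are created lazily
    print(setup((0, 0), (2, 1), "connA", links))
    print(setup((0, 0), (2, 1), "connB", links))
```

In this toy model a second connection over the same links simply lands in later slots, which mirrors the basic appeal of TDM circuits: once slots are reserved, traversal latency is fixed and contention-free.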

Cited by 5 publications (3 citation statements)
References 14 publications

“…Some proposals suggest separate networks for packet and circuit switched messages: Palumbo et al determine if messages will use the packet or the circuit switched channels depending on their size [4]; Duato et al decide at compilation time whether a circuit between two nodes should be established based on expected communication patterns [5]; Abousamra et al use the requests to reserve circuits for the replies based on estimates of circuit utilization times [6]; Abousamra et al send a circuit reservation request as soon as a cache hit is detected, though this may not be enough to completely hide the circuit reservation latency [7]. Other authors implement a single network that supports both packet and circuit switching: Enright et al build circuits on demand and undo them when they conflict with another circuit [8]; Abousamra et al configure circuits periodically based on online communication statistics [9]; Kline et al reserve circuits on demand so that flits can traverse multiple hops in a single cycle [10]; Mazloumi et al reserve a circuit with the request and activate it with a probe message when the reply is ready [11]; Liu et al speed up circuit setup in TDM NoCs by sending parallel probes [12].…”
Section: State of the Art (mentioning; confidence: 99%)
“…Most of those mechanisms establish circuits between nodes using dedicated networks or links [8,10,11,12] or at least need to send specific setup messages [4,5,7], and many of them need to wait for the circuit setup delay [14,12]. One of them introduces complexity at the network interfaces by forcing them to keep communication statistics [9].…”
Section: State of the Art (mentioning; confidence: 99%)