1992
DOI: 10.1109/22.146337
Minimization of delay and crosstalk in high-speed VLSI interconnects

Cited by 38 publications (10 citation statements)
References 20 publications
“…From the above results, a recursive relationship for generating transmission line moments can be obtained as (115). In practice, convergence of (115) requires 20-30 terms. It should be noted that the convergence of the series represented by (110) can suffer if the first few terms grow too quickly.…”
Section: B. Computation of Moments
confidence: 99%
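The recursion (115) itself is not reproduced in this excerpt, but the truncation behaviour the statement describes (summing a recursively generated series until the terms are negligible, which in practice takes a few tens of terms) can be sketched with a generic stand-in series. The function below is a hypothetical illustration, not the paper's moment recursion; the Taylor series of exp(-x) is used only because each term follows recursively from the previous one.

```python
# Hedged sketch: a generic recursively generated series, summed until the
# current term is small relative to the running total. This stands in for
# the paper's moment recursion (115), which is not given in the excerpt.

def truncated_series(x, tol=1e-12, max_terms=100):
    """Sum t_0 + t_1 + ... where t_k = (-x/k) * t_{k-1} (Taylor series of exp(-x)),
    stopping once |t_k| < tol * |sum|. Returns (value, number of terms used)."""
    term = 1.0   # t_0
    total = term
    for k in range(1, max_terms):
        term *= -x / k          # recursive step: t_k from t_{k-1}
        total += term
        if abs(term) < tol * abs(total):
            return total, k + 1
    return total, max_terms

value, n_terms = truncated_series(5.0)  # converges to exp(-5) in a few dozen terms
```

The stopping test is relative rather than absolute, which matches the practical observation in the quote: how many terms are needed depends on how fast the early terms grow before the factorial-type decay takes over.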
“…It should be noted that the matrix to be inverted in (10) is exactly the transpose of the Jacobian matrix J in (8) used to solve the original DNN equations (1). Therefore, the LU factors of the Jacobian required to solve the ADNN are just the transpose of the LU factors used in solving the original DNN.…”
Section: Sensitivity for Dynamic Neural Network
confidence: 99%
“…Therefore, the LU factors of the Jacobian required to solve the ADNN are just the transpose of the LU factors used in solving the original DNN. Because the J matrix in (8) has been built and decomposed into LU factors during the integration of the original DNN, the solution of the linearized ADNN equations (10) can be done without redoing the LU decomposition. In this way, the solution of the adjoint DNN can be achieved at an incremental computation effort once the original DNN is solved.…”
Section: Sensitivity for Dynamic Neural Network
confidence: 99%
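The reuse described above rests on a standard linear-algebra fact: if J = LU, then J^T = U^T L^T, so a system in J^T can be solved with the factors already computed for J. A minimal sketch (the matrix J and the right-hand sides are illustrative placeholders, not the paper's DNN/ADNN quantities):

```python
# Hedged sketch: solving the adjoint (transposed) system J^T y = c by reusing
# the LU factors computed for the original system J x = b, so no second
# factorization is needed.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
J = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)  # stand-in Jacobian
b = rng.standard_normal(5)  # right-hand side of the original (DNN-like) system
c = rng.standard_normal(5)  # right-hand side of the adjoint (ADNN-like) system

lu, piv = lu_factor(J)               # factorize once, during the original solve
x = lu_solve((lu, piv), b)           # original system:  J x = b
y = lu_solve((lu, piv), c, trans=1)  # adjoint system:   J^T y = c, same factors
```

Since LU factorization costs O(n^3) while each triangular solve costs only O(n^2), solving the adjoint system this way is the "incremental computation effort" the citation refers to.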