Complex interconnects for server links pose a design challenge due to the large number of components, interactions, and tradeoffs involved. This paper examines the evolution, design choices, current challenges, and future directions of such high speed links in a scalable server setting.
I. Introduction

As performance requirements for high-end servers increase, designers are faced with system demands for high speed links to provide data transport within and across computation nodes for increased bandwidth. As new technologies such as hyper-threading and multi-core processors become more prevalent, they place even higher demands on the signaling and interconnect. Despite high losses and interconnect type transitions, the speed and range of copper electrical links have been continuously extended through innovative designs. A careful balance and tradeoff between cost and performance at every level is needed to stay competitive in the high performance, high volume server market.

While most standards for high speed links have provisions for simpler topologies, such as agent to agent on the same board, across boards through a connector, in a backplane environment with two connectors, or in a simple cabled configuration [1], more complex topologies are rarely covered due to the vast number of design tradeoffs and interoperability issues. Section II of this paper examines the evolution of the high speed scalability link. Electrical signaling is covered in Section III, followed by a description of the complex internal/external topologies in Section IV. Link design challenges are examined in Section V. Section VI covers modeling and simulation, and Section VII addresses testing and validation. Future directions are discussed in Section VIII, and results are summarized in Section IX.