This paper examines the phenomenon of congestion in order to better understand the congestion management techniques that will be needed in high-speed, cell-based networks. We first use high time-resolution local area network (LAN) traffic data to explore the nature of LAN traffic variability, and then use those data to drive a trace-driven simulation of a connectionless service that interconnects LANs. The simulation allows us to characterize what congestion might look like in a high-speed, cell-based network. The most striking aspect of the LAN data is the extreme variability of the traffic on time scales ranging from milliseconds to months; conventional traffic models do not capture this behavior, which has a profound impact on the nature of congestion. Applying the measured data to simple models of LAN interconnection, we observe that (i) congestion, once it sets in, tends to persist, and losses can be significant; (ii) congestion losses cannot be avoided by modest increases in buffer capacity; (iii) the consequences of misengineering can be serious; and (iv) fortunately, most congested periods are preceded by warning signs of impending danger.
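To make the simulation approach concrete, the sketch below shows a minimal trace-driven, discrete-time simulation of a single finite-buffer queue, of the kind one might use to estimate cell loss as a function of buffer size. It is an illustration under stated assumptions, not the paper's simulator: the function names (simulate_finite_buffer, bursty_trace) are hypothetical, and the on/off trace generator stands in for the measured LAN traces used in the study.

```python
import random


def simulate_finite_buffer(arrivals_per_slot, service_per_slot, buffer_cells):
    """Return the fraction of offered cells lost for a given buffer size.

    arrivals_per_slot: sequence of cell counts offered in each time slot (the trace)
    service_per_slot:  cells the output link can drain per slot
    buffer_cells:      maximum queue length in cells
    """
    queue = 0
    offered = lost = 0
    for a in arrivals_per_slot:
        offered += a
        queue += a
        if queue > buffer_cells:                      # cells that overflow the buffer are dropped
            lost += queue - buffer_cells
            queue = buffer_cells
        queue = max(0, queue - service_per_slot)      # serve up to the link capacity each slot
    return lost / offered if offered else 0.0


def bursty_trace(slots, on_rate=20, off_rate=1, p_on_off=0.05, p_off_on=0.01, seed=1):
    """Hypothetical on/off source: long high-rate bursts separated by quiet periods.

    A stand-in for a measured trace; average load is modest, but bursts exceed
    the service rate, so losses can occur even at low overall utilization.
    """
    rng = random.Random(seed)
    on, trace = False, []
    for _ in range(slots):
        # Stay in the current state or switch with small probability.
        on = (rng.random() >= p_on_off) if on else (rng.random() < p_off_on)
        mean = on_rate if on else off_rate
        trace.append(rng.randint(0, 2 * mean))
    return trace


if __name__ == "__main__":
    trace = bursty_trace(200_000)
    for buf in (50, 100, 200, 400, 800):
        loss = simulate_finite_buffer(trace, service_per_slot=8, buffer_cells=buf)
        print(f"buffer={buf:4d} cells  loss fraction={loss:.4f}")
```

Running the sketch with increasing buffer sizes illustrates the kind of question the trace-driven study addresses: whether modest increases in buffer capacity meaningfully reduce loss when the offered traffic is highly bursty.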