Delay- and bandwidth-based alternatives to TCP congestion control have been around for nearly three decades and have seen a recent surge in interest. However, such designs have faced significant resistance to wide-scale deployment across the Internet, mostly due to serious concerns about noise in delay measurements, the overhead of pacing packets at controlled inter-packet gaps, and the changes required to the standard TCP stack. With the advent of high-speed networking, some of these concerns become even more significant. This thesis considers Rapid, a recent proposal for ultra-high-speed congestion control that perhaps stretches each of these challenges the furthest. Rapid adopts a framework of continuous fine-scale bandwidth probing and rate adaptation: it requires finely controlled inter-packet gaps at the sender and high-precision timestamping of received packets, and it relies on fine-scale changes in inter-packet gaps. While simulation-based evaluations of Rapid show outstanding performance gains along several important dimensions, these gains will not translate to the real world unless the above challenges are addressed. This thesis identifies the key challenges TCP Rapid faces on real high-speed networks: deployability in standard protocol stacks, precise creation of inter-packet gaps, robust bandwidth estimation in the presence of noise, and a stability/adaptability trade-off. A Linux implementation of Rapid is designed and developed after carefully considering each of these challenges. Evaluations on a 10 Gbps testbed confirm that the implementation does achieve the claimed performance gains, and that it would not have done so had each of the above challenges not been addressed.
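
As context for the high-precision timestamping requirement mentioned above, the sketch below shows the standard Linux facility a receiver-side implementation of this kind could build on: requesting hardware (with software fallback) receive timestamps via SO_TIMESTAMPING and reading them from recvmsg() ancillary data. This is a minimal illustration, not the thesis's actual Rapid implementation; the UDP socket, port number, and flag choices are assumptions for the example.

    /* Minimal sketch: high-precision RX timestamping on Linux via
     * SO_TIMESTAMPING. Illustrative only; not the thesis's Rapid code.
     * Note: hardware timestamps usually also require enabling NIC
     * timestamping on the interface (e.g., the SIOCSHWTSTAMP ioctl). */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <linux/net_tstamp.h>   /* SOF_TIMESTAMPING_* flags */
    #include <linux/errqueue.h>     /* struct scm_timestamping */

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Request raw hardware RX timestamps, with software fallback. */
        int flags = SOF_TIMESTAMPING_RX_HARDWARE |
                    SOF_TIMESTAMPING_RAW_HARDWARE |
                    SOF_TIMESTAMPING_RX_SOFTWARE |
                    SOF_TIMESTAMPING_SOFTWARE;
        if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
                       &flags, sizeof(flags)) < 0) {
            perror("SO_TIMESTAMPING");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(5001),   /* arbitrary example port */
            .sin_addr   = { .s_addr = htonl(INADDR_ANY) },
        };
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        char pkt[2048], ctrl[512];
        struct iovec iov = { .iov_base = pkt, .iov_len = sizeof(pkt) };
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
        };
        if (recvmsg(fd, &msg, 0) < 0) { perror("recvmsg"); return 1; }

        /* Timestamps arrive as SCM_TIMESTAMPING control messages:
         * ts[0] = software, ts[2] = raw hardware (if the NIC stamped it). */
        for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c;
             c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == SOL_SOCKET &&
                c->cmsg_type == SCM_TIMESTAMPING) {
                struct scm_timestamping ts;
                memcpy(&ts, CMSG_DATA(c), sizeof(ts));
                printf("sw %lld.%09ld  hw %lld.%09ld\n",
                       (long long)ts.ts[0].tv_sec, ts.ts[0].tv_nsec,
                       (long long)ts.ts[2].tv_sec, ts.ts[2].tv_nsec);
            }
        }
        return 0;
    }

Per-packet nanosecond-resolution receive timestamps of this kind are what make it possible to observe the fine-scale inter-packet gap changes that bandwidth estimation in designs like Rapid depends on.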