Abstract: Delay spikes on Internet paths can cause spurious TCP timeouts, leading to significant throughput degradation. Conversely, if TCP is too slow to detect that a retransmission is necessary, it can remain idle for a long time instead of transmitting. The goal is to find a Retransmission Timeout (RTO) value that balances the throughput degradation between these two cases. In current TCP implementations, the RTO is a function of the Round Trip Time (RTT) alone. We show that the optimal RTO, the one that maximizes TCP throughput, must also depend on the TCP window size. Intuitively, the larger the TCP window, the longer the optimal RTO. We derive the optimal RTO for several RTT distributions. An important advantage of our algorithm is that it can be easily implemented on top of the existing TCP timeout mechanism.
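To make the contrast concrete, the sketch below shows the standard RTT-only RTO computation from RFC 6298 (smoothed RTT plus a variance term) alongside a hypothetical window-aware variant. The `window_scale` logarithmic factor and the `RtoEstimator` class are names and forms invented here for illustration; they reflect only the abstract's qualitative claim that larger windows warrant longer timeouts, not the optimal RTO actually derived in the paper.

```python
import math

K = 4        # RTTVAR multiplier (RFC 6298)
ALPHA = 1/8  # SRTT smoothing gain (RFC 6298)
BETA = 1/4   # RTTVAR smoothing gain (RFC 6298)
G = 0.001    # assumed clock granularity, seconds

class RtoEstimator:
    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def update(self, rtt_sample):
        """Fold one RTT measurement into SRTT/RTTVAR per RFC 6298."""
        if self.srtt is None:
            self.srtt = rtt_sample
            self.rttvar = rtt_sample / 2
        else:
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - rtt_sample)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * rtt_sample

    def rto_rtt_only(self):
        """Current practice: RTO from RTT statistics alone,
        floored at 1 second as RFC 6298 recommends."""
        return max(1.0, self.srtt + max(G, K * self.rttvar))

    def rto_window_aware(self, cwnd_segments):
        """Hypothetical variant: stretch the timeout as the window grows,
        since a spurious timeout wastes more of a large window.
        The logarithmic scale factor is purely illustrative."""
        scale = 1 + 0.1 * math.log2(max(1, cwnd_segments))
        return self.rto_rtt_only() * scale

if __name__ == "__main__":
    est = RtoEstimator()
    for sample in (0.100, 0.110, 0.095, 0.300):  # RTT samples in seconds
        est.update(sample)
    print(est.rto_rtt_only(), est.rto_window_aware(cwnd_segments=64))
```

Because the window-aware variant only rescales the output of the existing estimator, it could be layered on the current TCP timeout mechanism without new per-packet state, which is the implementation advantage the abstract points to.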