Designing routing schemes is a multidimensional and complex task that depends on the objective function, the computational model (centralized vs. distributed), and the amount of uncertainty (online vs. offline). Nevertheless, there are quite a few well-studied general techniques for a large variety of network problems. In contrast, in our view, practical techniques for designing robust routing schemes are scarce: while fault-tolerance has been studied from a number of angles, existing approaches deal with faults after the fact, by rerouting, self-healing, or similar techniques. We argue that this places a heavy burden on the designer, as in such a system any algorithm must account for the effects of faults on communication.

With the goal of initiating efforts towards addressing this issue, we showcase simple and generic transformations that can be used as a black box to increase resilience against (independently distributed) faults. Given a network and a routing scheme, we determine a reinforced network and a corresponding routing scheme that faithfully preserves the specification and behavior of the original scheme. We show that reasonably small constant overheads in the size of the new network compared to the old are sufficient for substantially relaxing the reliability requirements on individual components. The main message of this paper is that the task of designing a robust routing scheme can be decoupled into (i) designing a routing scheme that meets the specification in a fault-free environment, (ii) ensuring that nodes correspond to fault-containment regions, i.e., fail (approximately) independently, and (iii) applying our transformation to obtain a reinforced network and a robust routing scheme that is fault-tolerant.

…likely that none of them fail. Existing designs and algorithms (that are considered practical) do account for lost messages and, in some cases, permanently crash-failing nodes or edges [CLM12, KKD10, PNK+06]. It is our understanding that handling stronger fault types is considered practically infeasible, be it in terms of the complexity of implementations or the overheads involved. However, pretending that crash failures are the worst that can happen means that the entire system may fail whenever, e.g., we face a "babbling idiot" (i.e., a node erroneously generating many messages and congesting the network), excessive link delays (violating the specification), or misrouting, corruption, or loss of messages. The current approach is to (i) use techniques like error correction, acknowledging reception, etc., to mask the effects of such faults, (ii) hope to detect and deactivate faulty components quickly (logically mapping faults to crashes), and (iii) repair or replace the faulty components after they have been taken offline. This strategy may result in significant disruption of applications; possible consequences include:

(I) Severe delays in execution, as successful message delivery requires that faulty components be detected and deactivated first.
(II) Failure to deliver cor...
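
To make the claimed trade-off concrete, the following is a minimal sketch of a back-of-the-envelope reliability calculation under the independence assumption of step (ii). It assumes a simple replication-based reinforcement in which each node is duplicated ell times and a reinforced node fails only if all of its copies fail; this toy model, as well as the parameters n, p, and ell, are our own illustration here, not the paper's actual construction.

```python
# Toy reliability calculation for a replication-based reinforcement.
# Assumption (illustrative, not the paper's construction): each of n
# nodes fails independently with probability p, and a reinforced node
# fails only if all of its ell replicas fail, i.e., with probability
# p**ell.

def network_failure_prob(n: int, p: float, ell: int = 1) -> float:
    """Probability that at least one of n reinforced nodes fails."""
    p_node = p ** ell                    # failure prob. of one reinforced node
    return 1.0 - (1.0 - p_node) ** n     # complement of "all nodes survive"

if __name__ == "__main__":
    n, p = 10_000, 1e-4                  # hypothetical network size / node failure rate
    for ell in (1, 2, 3):
        print(f"ell={ell}: network failure probability ~ "
              f"{network_failure_prob(n, p, ell):.2e}")
```

In this toy model, with n = 10,000 and p = 10^-4, the unreinforced network fails with probability roughly 0.63, whereas a constant replication factor of ell = 2 or ell = 3 drives the failure probability down to about 10^-4 or 10^-8, respectively, illustrating how a small constant size overhead can substantially relax the reliability requirements on individual components.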