Neural networks are a popular choice for accurately performing complex classification tasks. In edge applications, neural network inference is accelerated on embedded hardware platforms, which often utilise FPGA-based architectures due to their low power consumption and flexible parallelism. Many of these applications require hardware that is resilient to faults in order to comply with safety standards. In this work, we present Selective TMR, an automated tool that analyses the sensitivity of individual computations within neural network inference to the overall network accuracy. The tool then triplicates the most sensitive computations, which increases the functional safety of the neural network accelerator without resorting to full triple modular redundancy (TMR). As a result, designers can explore the trade-off between accelerator reliability and hardware cost. In some cases, we observe a 24% improvement in minimum accuracy under a single stuck-at hardware fault, while increasing the overall resource footprint by 56%.
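
To make the selective TMR idea concrete, the following is a minimal sketch (not the tool itself) of the two steps the abstract describes: ranking computations by how much a stuck-at fault in them degrades accuracy, and then triplicating only the most sensitive ones behind a majority vote. The toy single-layer network, the random evaluation data, the `budget` parameter, and the fault model are all hypothetical placeholders; a real flow would use the trained network, a validation set, and the accelerator's actual fault model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy layer and evaluation set (placeholders for a trained network
# and validation data).
W = rng.normal(size=(16, 8))          # 16 inputs -> 8 neurons
X = rng.normal(size=(64, 16))         # 64 evaluation samples
y = rng.integers(0, 8, size=64)       # reference labels

def accuracy(weights):
    """Classification accuracy of the toy single-layer network."""
    logits = X @ weights
    return float(np.mean(np.argmax(logits, axis=1) == y))

baseline = accuracy(W)

def stuck_at_fault(weights, neuron, value):
    """Crude stand-in for a stuck-at fault: force one neuron's weights to a constant."""
    faulty = weights.copy()
    faulty[:, neuron] = value
    return faulty

# Step 1 -- sensitivity analysis: worst-case accuracy drop per neuron
# under emulated stuck-at-0 and stuck-at-1 faults.
sensitivity = np.zeros(W.shape[1])
for n in range(W.shape[1]):
    drops = [baseline - accuracy(stuck_at_fault(W, n, v)) for v in (0.0, 1.0)]
    sensitivity[n] = max(drops)

# Step 2 -- selective TMR: protect only the most sensitive fraction of neurons.
budget = 0.25                                  # hypothetical hardware budget
k = max(1, int(budget * W.shape[1]))
protected = set(np.argsort(sensitivity)[-k:])

def forward_with_tmr(weights, x, faults=None):
    """Evaluate the layer; protected neurons are computed by three replicas and
    majority-voted, so a single faulty replica is out-voted."""
    faults = faults or {}
    out = np.empty(weights.shape[1])
    for n in range(weights.shape[1]):
        replicas = 3 if n in protected else 1
        votes = []
        for r in range(replicas):
            w = weights[:, n].copy()
            if (n, r) in faults:               # inject a fault into one replica only
                w[:] = faults[(n, r)]
            votes.append(x @ w)
        out[n] = np.median(votes)              # median of 3 == majority vote
    return out

# A fault in one replica of a protected neuron is masked by the vote.
fault = {(next(iter(protected)), 0): 0.0}
clean = forward_with_tmr(W, X[0])
faulty = forward_with_tmr(W, X[0], faults=fault)
print(np.allclose(clean, faulty))              # True: the faulty replica is out-voted
```

Unprotected neurons are computed once, so the sketch reproduces the trade-off the abstract points to: raising `budget` improves worst-case accuracy under a single fault at the cost of a larger resource footprint.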