Operating at reduced voltages offers substantial energy-efficiency improvements, but at the cost of an increased probability of computational errors due to hardware faults. In this context, we target Deep Neural Networks (DNNs) as energy-hungry building blocks in embedded systems. Without an error feedback mechanism, blind voltage down-scaling results in degraded accuracy or total system failure. In this paper, we investigate solutions based on the inherent properties of Self-Supervised Learning (SSL) and on Algorithm-Based Fault Tolerance (ABFT) techniques. A DNN model trained on the MNIST dataset was deployed on a Field Programmable Gate Array (FPGA) operating at reduced voltages and employing the proposed schemes. The SSL approach provides extremely low-overhead fault detection at the cost of lower error coverage and additional training, whereas ABFT incurs less than 8% run-time overhead with close to 100% error detection. Using these solutions, we achieve substantial energy savings, up to 48%, without compromising model accuracy.
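For reference, ABFT for the matrix-multiply layers of a DNN is commonly realized with checksum encoding. The Python sketch below illustrates the general checksum idea on a fully connected layer; the function name, shapes, and tolerance are illustrative assumptions, not details taken from the paper's implementation.

```python
import numpy as np

def abft_matvec(W, x, tol=1e-3):
    """Checksum-based ABFT for y = W @ x (illustrative sketch).

    A checksum row (the column sums of W) is appended before the multiply.
    After the multiply, the checksum output must match the sum of the
    regular outputs up to a tolerance; otherwise a fault is flagged.
    The tolerance value is an assumption chosen for float32 arithmetic.
    """
    # Encode: append the column-sum checksum row to W.
    W_ext = np.vstack([W, W.sum(axis=0, keepdims=True)])

    # Compute the extended product (the step executed on the low-voltage hardware).
    y_ext = W_ext @ x
    y, checksum = y_ext[:-1], y_ext[-1]

    # Detect: the checksum row times x should equal the sum of the outputs.
    fault_detected = not np.isclose(y.sum(), checksum, rtol=tol, atol=tol)
    return y, fault_detected

# Example usage on a random fully connected layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128)).astype(np.float32)
x = rng.standard_normal(128).astype(np.float32)
y, fault = abft_matvec(W, x)
print("fault detected:", fault)
```

The run-time cost of such a scheme is one extra row in the matrix multiply plus a reduction over the outputs, which is consistent with the low single-digit-percent overhead reported above.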