Classical redundancy-based fault detection techniques, such as Duplication with Comparison (DWC), rely on replicating the computation and comparing the replicas' outputs at bit-wise granularity. In many application environments the resulting overheads are prohibitive, especially when the application is characterized by an intrinsic level of error tolerance. This paper presents a novel fault-detection approach for the specific context of image filtering. The peculiarity of the proposed approach is that it estimates the impact of the fault on the processed output, in order to determine whether the image is usable or should be re-processed. To limit overheads, the proposed solution exploits Approximate Computing (AC), allowing the definition of disciplined AC strategies to trade off accuracy against cost. The core of our solution is the combination of Image Quality Assessment metrics and Machine Learning models to assess the visual impact of the fault in a lightweight manner. Extensive experimental campaigns demonstrate the effectiveness of the solution, achieving a reduction in execution time of up to 44% with respect to classical DWC, with a fault-detection precision ranging from 94.58% to 96.70% and a recall ranging from 88.2% to 97.8%, depending on the adopted level of approximation.
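To make the core idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: lightweight IQA features are computed between the filter output under test and a cheap approximate re-execution, and a small ML classifier flags images whose faults have a visible impact and should be re-processed. The feature set, the toy training data, and the placeholder images are assumptions introduced only for illustration.

```python
# Illustrative sketch (assumed design, not the paper's actual pipeline):
# IQA features on (output, approximate reference) pairs -> ML classifier.
import numpy as np
from skimage.metrics import structural_similarity
from sklearn.linear_model import LogisticRegression


def iqa_features(output_img: np.ndarray, approx_ref: np.ndarray) -> np.ndarray:
    """Lightweight IQA features comparing the (possibly faulty) output
    against an approximate reference re-execution."""
    ssim = structural_similarity(output_img, approx_ref, data_range=1.0)
    mae = float(np.mean(np.abs(output_img - approx_ref)))
    peak_err = float(np.max(np.abs(output_img - approx_ref)))
    return np.array([ssim, mae, peak_err])


# Toy training set standing in for offline fault-injection data:
# label 1 = fault with visible impact (re-process), 0 = image still usable.
rng = np.random.default_rng(0)
X_train = np.vstack([
    np.column_stack([rng.uniform(0.95, 1.0, 50),   # usable: high SSIM,
                     rng.uniform(0.0, 0.01, 50),   # small mean error,
                     rng.uniform(0.0, 0.05, 50)]), # small peak error
    np.column_stack([rng.uniform(0.3, 0.9, 50),    # visible fault: low SSIM,
                     rng.uniform(0.02, 0.2, 50),   # larger mean error,
                     rng.uniform(0.1, 1.0, 50)]),  # larger peak error
])
y_train = np.concatenate([np.zeros(50), np.ones(50)])
clf = LogisticRegression().fit(X_train, y_train)

# Online check: compare the filtered image with an approximate re-execution
# (both placeholders here) and decide whether re-processing is needed.
output = rng.random((64, 64))
approx = output + rng.normal(0.0, 0.005, (64, 64))
features = iqa_features(output, approx).reshape(1, -1)
needs_reprocessing = bool(clf.predict(features)[0])
```

The level of approximation in the sketch would correspond to how cheaply the reference re-execution and the IQA features are computed, which is where the accuracy/cost trade-off described above would materialize.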