This paper aims to exploit approximate computing units in image processing systems and artificial neural networks. For this purpose, a general design methodology is introduced, and approximation-oriented architectures are developed for different applications. The paper proposes a method to balance the power/area efficiency of circuit-level design against the accuracy requirements of system-level design. The proposed method selects the approximate computational units that minimize the total computation cost while maintaining the overall system performance. This is accomplished by formulating a linear programming problem, which can be solved by conventional linear programming solvers. The approximate computing units proposed in this paper, such as multipliers, neurons, and convolution kernels, support power/area reduction through accuracy scaling. The formulation is demonstrated on applications in image processing, digital filters, and artificial neural networks. The proposed technique and architectures are thus evaluated with different approximate computing units, as well as with system-level requirement metrics such as PSNR and classification performance.

INDEX TERMS Approximate computing, artificial neural networks, field programmable gate arrays, high-level synthesis, image processing.
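To make the abstract's unit-selection idea concrete, the following is a minimal illustrative sketch of a generic linear program that picks an approximate unit per operation slot so as to minimize total power under an error budget. It is not the authors' exact formulation; the unit names, power/error figures, slot count, and error budget are invented placeholders, and the rounding of the relaxed LP solution is a simple heuristic.

```python
# Illustrative sketch only: a generic unit-selection LP in the spirit of the
# abstract, NOT the paper's exact formulation. All numbers below are invented.
import numpy as np
from scipy.optimize import linprog

# Candidate approximate multipliers for each operation slot:
# (power cost, error contribution), both in arbitrary units.
units = {
    "exact":     (1.00, 0.00),
    "approx_lo": (0.70, 0.02),
    "approx_hi": (0.45, 0.08),
}
n_slots = 4            # hypothetical number of multiplication slots in the datapath
error_budget = 0.15    # hypothetical system-level accuracy constraint

names = list(units)
power = np.array([units[u][0] for u in names])
error = np.array([units[u][1] for u in names])
k = len(names)

# Decision variables x[s, u] in [0, 1]: relaxed selection of unit u for slot s.
c = np.tile(power, n_slots)                      # objective: total power

# Each slot must select exactly one unit (equality constraints).
A_eq = np.zeros((n_slots, n_slots * k))
for s in range(n_slots):
    A_eq[s, s * k:(s + 1) * k] = 1.0
b_eq = np.ones(n_slots)

# Accumulated error must stay within the budget (inequality constraint).
A_ub = np.tile(error, n_slots).reshape(1, -1)
b_ub = np.array([error_budget])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (n_slots * k), method="highs")

# Round the relaxed solution per slot (heuristic, for illustration only).
for s in range(n_slots):
    choice = names[int(np.argmax(res.x[s * k:(s + 1) * k]))]
    print(f"slot {s}: {choice}")
```

In this toy setting, the cheapest unit for every slot would violate the error budget, so the solver mixes exact and approximate variants, which mirrors the abstract's idea of trading circuit-level power/area against a system-level accuracy constraint.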