Image compression techniques for wireless sensor networks (WSNs) remain an active area of research. Techniques reported in the literature include the discrete cosine transform (DCT), the discrete wavelet transform (DWT), set partitioning in hierarchical trees (SPIHT), and embedded zerotree wavelet (EZW) coding. Research on image compression in WSNs is driven by the need to improve the energy efficiency of sensor nodes and extend network lifetime without compromising the quality of the reconstructed data. Several approaches centered on image compression and related factors have been developed to limit the energy consumption of sensor nodes, but most lack an error-bound mechanism that balances compression ratio against distortion of the reconstructed image. This paper therefore reviews and analyzes image compression techniques and approaches in WSNs. The approaches available in the literature are classified according to the compression technique they adopt, and their strengths and weaknesses are highlighted. In addition, a rate-distortion balanced data compression algorithm with an error-bound mechanism, based on an artificial neural network (ANN) in the form of an autoencoder (AE), was coded and simulated in MATLAB, then evaluated against conventional approaches. The experimental results show that the simulated algorithm achieves lower root mean square error (RMSE) and higher coefficient of determination (R²) values across variable compression ratios than principal component analysis (PCA), the DCT, and the fast Fourier transform (FFT) on the Grand-St-Bernard meteorological dataset. It also achieves lower RMSE and higher compression ratios than the lightweight temporal compression (LTC) algorithm across variable error bounds on the LUCE meteorological dataset. The simulated algorithm therefore offers better compression fidelity than conventional approaches that lack an error-bound mechanism, and its error-bound mechanism provides a principled approach to balancing compression ratio against reconstructed data quality.
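For illustration only, the sketch below shows one way such an autoencoder-based scheme with a per-sample error bound could be simulated in MATLAB. The variable names (e.g., errBound) and the use of the Deep Learning Toolbox functions trainAutoencoder, encode, and decode are assumptions for this sketch, not the authors' actual implementation.

```matlab
% Minimal sketch (assumption, not the paper's code): autoencoder-based
% compression with a per-sample error-bound fallback.
% Requires the Deep Learning Toolbox.

X = randn(16, 500);              % stand-in sensor readings: 16 features x 500 samples
hiddenSize = 4;                  % latent dimension -> nominal compression ratio 16/4 = 4
errBound = 0.5;                  % user-chosen per-sample RMSE bound (hypothetical)

autoenc = trainAutoencoder(X, hiddenSize, 'MaxEpochs', 200);

Z    = encode(autoenc, X);       % compressed representation transmitted by the node
Xrec = decode(autoenc, Z);       % reconstruction at the sink

% Error-bound mechanism: samples whose reconstruction RMSE exceeds the
% bound are flagged for raw (uncompressed) transmission instead.
perSampleRMSE = sqrt(mean((X - Xrec).^2, 1));
sendRaw = perSampleRMSE > errBound;

% Fidelity metrics of the kind used in the paper's evaluation.
rmse = sqrt(mean((X(:) - Xrec(:)).^2));
r2   = 1 - sum((X(:) - Xrec(:)).^2) / sum((X(:) - mean(X(:))).^2);
fprintf('RMSE = %.4f, R^2 = %.4f, raw fallback = %d/%d samples\n', ...
        rmse, r2, nnz(sendRaw), numel(sendRaw));
```

In a design of this kind, the raw-transmission fallback is what enforces the error bound regardless of how well the autoencoder generalizes: the bound trades compression ratio for reconstruction quality, since tighter bounds force more samples onto the uncompressed path.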