Approximate computing is a low-power design technique that offers an additional degree of freedom in digital circuit design. Pruning is one such approximate design technique: it removes logic gates or wires from a circuit to reduce power consumption while introducing minimal error. In this work, a novel machine learning (ML)-based pruning technique is introduced for designing digital circuits. A random forest decision tree algorithm is used to prune nodes selectively based on their input patterns. In addition, an error compensation value is added to the original output to reduce the error rate. Experimental results demonstrate the efficiency of the proposed technique in terms of area, power, and error rate. Compared with conventional pruning, the proposed ML pruning achieves 32% area and 26% delay reductions in an 8x8 multiplier implementation. Low-power image processing is essential in applications such as image compression and enhancement. For real-time evaluation, the proposed ML-optimized pruning is applied to the discrete cosine transform (DCT), a basic building block of image and video processing applications. Experimental results on benchmark images show that the proposed pruning achieves a high peak signal-to-noise ratio (PSNR) with considerable energy savings compared with other methods.
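To make the idea concrete, the following is a minimal Python sketch of how a random forest classifier could decide, node by node, whether pruning is acceptable based on input-pattern statistics, with a fixed compensation constant added to the approximate output. The feature set, labels, and compensation step are illustrative assumptions, not the exact implementation described above.

```python
# Minimal sketch of ML-based pruning, assuming each circuit node is described
# by a feature vector of input-pattern statistics. All features, labels and
# the compensation constant here are hypothetical, for demonstration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: one row per circuit node.
# Features: [signal probability, toggle rate, fan-out]; label 1 marks nodes
# whose removal keeps the output error within an acceptable bound.
X_train = rng.random((500, 3))
y_train = (X_train[:, 0] < 0.1).astype(int)  # toy labelling rule

# A random forest of decision trees decides, per node, whether to prune it.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def prune_decisions(node_features):
    """Return a boolean mask: True where a node is selected for pruning."""
    return clf.predict(node_features).astype(bool)

def compensated_output(approx_value, compensation):
    """Add a precomputed error-compensation constant to the approximate result."""
    return approx_value + compensation

# Example: classify 10 new nodes and apply a toy compensation value.
new_nodes = rng.random((10, 3))
print("prune mask:", prune_decisions(new_nodes))
print("compensated result:", compensated_output(approx_value=120, compensation=3))
```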
Approximate computing is a popular approach to low-power design used in applications such as image processing, video processing, multimedia, and data mining. It is mainly applied to arithmetic circuits, in particular multipliers. The multiplier is the most essential element in these circuits, and overall power consumption depends heavily on its performance. Researchers have worked on approximate multipliers for power reduction for decades, but designing a low-power approximate multiplier is not straightforward: achieving low power, a minimal error rate, and high accuracy at the same time remains a major challenge for the digital design industry. To address these issues, deep learning (DL) approaches are applied to digital circuit design for higher accuracy. DL has recently delivered high learning and prediction accuracy in several fields. Therefore, Long Short-Term Memory (LSTM), a popular time-series DL method, is used in this work for approximate computing. To obtain an optimal solution, the LSTM is combined with the meta-heuristic jellyfish search optimisation technique to design an input-aware deep learning-based approximate multiplier (DLAM). The jellyfish-optimised LSTM model is used to improve the error metrics of the approximate multiplier. The optimal hyperparameters of the LSTM model are identified by jellyfish search optimisation; this fine-tuning yields an LSTM with higher accuracy. The proposed pre-trained LSTM model is used to generate approximate design libraries for different truncation levels as a function of area, delay, power, and error metrics. Experimental results on an 8-bit multiplier in an image processing application show that the proposed approximate multiplier achieves superior area and power reduction with very good error-rate results.
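As a rough illustration of the hyperparameter search, the sketch below implements a simplified jellyfish search loop over assumed LSTM hyperparameters (hidden units, learning rate, sequence length), with a placeholder objective standing in for the LSTM's validation error. The bounds, objective surface, and parameter choices are assumptions for demonstration, not the published DLAM configuration.

```python
# Simplified jellyfish-search hyperparameter tuning sketch. lstm_val_error is a
# stand-in for training and evaluating the actual LSTM on multiplier data; it
# is an illustrative assumption, not the paper's model.
import numpy as np

rng = np.random.default_rng(1)

# Hyperparameter bounds: [hidden_units, learning_rate, sequence_length]
LB = np.array([16.0, 1e-4, 4.0])
UB = np.array([256.0, 1e-1, 64.0])

def lstm_val_error(hp):
    """Placeholder objective: stand-in for LSTM validation error (lower is better)."""
    units, lr, seq_len = hp
    # Toy surface with a minimum near (128, 0.01, 16), purely for demonstration.
    return ((units - 128) / 128) ** 2 + (np.log10(lr) + 2) ** 2 + ((seq_len - 16) / 16) ** 2

def jellyfish_search(objective, pop_size=20, iters=100):
    pop = LB + rng.random((pop_size, LB.size)) * (UB - LB)
    fitness = np.array([objective(x) for x in pop])
    for t in range(1, iters + 1):
        best = pop[fitness.argmin()]
        c = abs((1 - t / iters) * (2 * rng.random() - 1))  # time-control function
        for i in range(pop_size):
            if c >= 0.5:  # follow the ocean current toward the best solution
                trend = best - 3.0 * rng.random() * pop.mean(axis=0)
                cand = pop[i] + rng.random(LB.size) * trend
            elif rng.random() > 1 - c:  # passive (type A) motion within the swarm
                cand = pop[i] + 0.1 * rng.random(LB.size) * (UB - LB)
            else:  # active (type B) motion relative to a random jellyfish
                j = rng.integers(pop_size)
                direction = pop[j] - pop[i] if fitness[j] < fitness[i] else pop[i] - pop[j]
                cand = pop[i] + rng.random(LB.size) * direction
            cand = np.clip(cand, LB, UB)
            f = objective(cand)
            if f < fitness[i]:
                pop[i], fitness[i] = cand, f
    return pop[fitness.argmin()], fitness.min()

best_hp, best_err = jellyfish_search(lstm_val_error)
print("best hyperparameters [units, lr, seq_len]:", best_hp, "error:", best_err)
```

In practice the placeholder objective would be replaced by a full train-and-validate cycle of the LSTM, so each fitness evaluation is expensive and the population size and iteration count would be chosen accordingly.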