Arithmetic hardware such as multipliers and dividers is essential in virtually all electronic systems. This paper explores Vedic mathematics techniques for high-speed, low-area multiplication, applying the Anurupyena sutra across a range of operand bit widths. Recent studies employ parallelism to tackle computationally demanding problems, and several designs have been developed for FPGA implementation using VLSI design approaches and parallel computing techniques. As artificial intelligence advances, research in signal processing, machine learning, and reconfigurable computing warrants close attention, and continued work on energy-constrained computing technology is needed to support deep learning algorithms. Multipliers and adders are key components of deep learning algorithms; the multiplier is an energy-intensive element of signal processing in ALUs, Convolutional Neural Networks (CNNs), and Deep Neural Networks (DNNs). For DNN applications, the proposed method incorporates Booth multiplier blocks and a carry-save multiplier into the Anurupyena architecture. The Vedic mathematics algorithms are contrasted with traditional multiplication methods such as the array, Wallace, and Booth multipliers; on the target hardware platform, the Vedic algorithms run faster, consume less power, and occupy less area. Implementations were carried out in Verilog HDL with Xilinx Vivado 2019.1 on a Kintex-7 FPGA, and the resulting designs reduce area and propagation delay relative to the other multiplier architectures.
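As a rough illustration of the divide-and-conquer structure underlying an Anurupyena-style multiplier, the Verilog sketch below builds an 8x8 product from four 4x4 sub-products that are recombined with shifts and additions. The module name vedic_mult8, the port names, and the use of the built-in `*` operator for the sub-products are illustrative assumptions, not the RTL evaluated in the paper, which would instantiate Booth and carry-save multiplier blocks in place of the sub-products.

```verilog
// Hypothetical sketch of an Anurupyena-style (divide-and-conquer)
// 8x8 multiplier: a*b = (aH*bH << 8) + ((aH*bL + aL*bH) << 4) + aL*bL.
// Names and structure are assumptions for illustration only.
module vedic_mult8 (
    input  [7:0]  a,
    input  [7:0]  b,
    output [15:0] p
);
    // Four 4x4 partial products over the operand nibbles. The paper's
    // design would replace these with Booth / carry-save blocks.
    wire [7:0] p_ll = a[3:0] * b[3:0];  // low  x low
    wire [7:0] p_lh = a[3:0] * b[7:4];  // low  x high
    wire [7:0] p_hl = a[7:4] * b[3:0];  // high x low
    wire [7:0] p_hh = a[7:4] * b[7:4];  // high x high

    // 9-bit intermediate so the cross-term carry is not truncated.
    wire [8:0] p_mid = p_lh + p_hl;

    // Recombine the sub-products at their proportional bit positions.
    assign p = {p_hh, 8'b0} + ({7'b0, p_mid} << 4) + {8'b0, p_ll};
endmodule
```

The same recombination pattern composes recursively (16x16 from four 8x8 blocks, and so on), which is how such hierarchical Vedic multipliers typically scale to wider operand widths.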