2018
DOI: 10.14419/ijet.v7i2.8.10471
Linear convolution using UT Vedic multiplier

Abstract: Linear convolution is one of the elemental operations of signal processing systems and relies on multiplication algorithms. In this project we perform linear convolution using the ancient multiplication algorithm Urdhva Tiryagbhyam (UT), one of the 16 sutras of Vedic mathematics. Compared with other multipliers, this technique gives the best results in speed, and it is used here to improve the timing performance of the design. Our aim is to design an 8-bit convolution unit using UT. The synth…
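The abstract's two ingredients can be sketched in software: Urdhva Tiryagbhyam forms each product column "vertically and crosswise" and propagates the carries, and the same cross-product pattern is exactly what linear convolution computes. A minimal Python sketch under those assumptions (this models the algorithm, not the paper's 8-bit hardware design; all function names are illustrative):

```python
def to_bits(x, n):
    """Integer -> list of n bits, LSB first."""
    return [(x >> i) & 1 for i in range(n)]

def from_bits(bits):
    """List of bits (LSB first) -> integer."""
    return sum(b << i for i, b in enumerate(bits))

def ut_multiply(a_bits, b_bits):
    """Urdhva Tiryagbhyam ("vertically and crosswise") multiplication.

    Each output column sums the crosswise partial products a[i]*b[col-i],
    emits the low bit, and carries the rest into the next column.
    """
    n = len(a_bits)
    result, carry = [], 0
    for col in range(2 * n - 1):
        s = carry
        for i in range(n):
            j = col - i
            if 0 <= j < n:
                s += a_bits[i] * b_bits[j]
        result.append(s & 1)   # column result bit
        carry = s >> 1         # multi-bit carry into the next column
    while carry:               # flush remaining carry bits
        result.append(carry & 1)
        carry >>= 1
    return result

def linear_convolution(x, h):
    """Linear convolution y[k] = sum_i x[i] * h[k - i] (same cross pattern)."""
    y = [0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# 4-bit UT multiply: 11 * 13
print(from_bits(ut_multiply(to_bits(11, 4), to_bits(13, 4))))  # -> 143
```

Note that `ut_multiply` without the carry steps is just `linear_convolution` applied to bit sequences, which is why a fast UT multiplier maps naturally onto a convolution datapath.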

Cited by 10 publications (3 citation statements) · References 0 publications
“…Deep learning concepts for computer vision gave rise to convolutional neural networks, whose main areas of application include image and video analysis [28]. Flattened image pixels fed directly to a feed-forward network give poor performance and low accuracy.…”
Section: Convolutional Neural Network
confidence: 99%
“…Therefore, strong multipliers are used to compute the square of a binary integer. Multipliers such as the Braun, Wallace tree, and Dadda multipliers are used in most high-speed applications [9][10][11][12]; the Booth multiplier is used in Baugh-Wooley two's-complement methods [4]. Vedic multipliers are also evolving into patterns beyond the traditional multipliers.…”
Section: Introduction
confidence: 99%
“…The fault-tolerant Han-Carlson adder is described in detail. The Triple Modular Redundancy strategy is introduced to minimize throughput degradation in the presence of a faulty adder [12][13][14][15]. Owing to its faster operation, regular configuration, and balanced loading of internal nodes compared to other parallel-prefix adders, the Han-Carlson adder is a popular choice in high-speed ALU architectures. They then address the Han-Carlson adder model, an activity that is important from the fault-tolerance point of view [16][17][18][19][20]. The Han-Carlson adder offers a strong trade-off among fanout, number of logic levels, and number of black cells.…”
Section: Introduction
confidence: 99%
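The Triple Modular Redundancy idea mentioned in the excerpt above is simple to sketch: run three copies of the adder and take a bitwise 2-of-3 majority vote, so a fault in any single copy is masked. A minimal Python sketch (the Han-Carlson prefix network itself is not reproduced here; `faulty_adder` and its stuck-at fault are hypothetical, for illustration only):

```python
def tmr_vote(a, b, c):
    """Bitwise 2-of-3 majority vote over three redundant outputs."""
    return (a & b) | (b & c) | (a & c)

def adder(x, y):
    """Stand-in for a correct adder copy (a real design would be Han-Carlson)."""
    return x + y

def faulty_adder(x, y):
    """Hypothetical faulty copy: bit 2 of the sum is stuck at 1."""
    return (x + y) | 0b100

def tmr_add(x, y):
    """TMR: the vote masks the single faulty copy."""
    return tmr_vote(adder(x, y), adder(x, y), faulty_adder(x, y))

print(tmr_add(3, 8))  # -> 11, despite the injected fault
```

The vote `(a & b) | (b & c) | (a & c)` is computed independently per bit position, which is why TMR degrades throughput only by the voter's delay rather than requiring fault detection and retry.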