2019
DOI: 10.1587/elex.15.20180909
Short floating-point representation for convolutional neural network inference

Abstract: Convolutional neural networks (CNNs) are being widely used in computer vision tasks, and there have been many efforts to implement CNNs in ASIC or FPGA for power-hungry environments. Instead of the previous common representation, the fixed-point representation, this letter proposes a short floating-point representation for CNNs. The short floating-point representation is based on the normal floating-point representation, but has much less width and does not have complex cases like Not-a-Number and infinity cas…
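The idea described in the abstract — a narrow floating-point format without NaN/infinity special cases — can be sketched as follows. This is a minimal illustration, not the paper's actual format: the field widths (1 sign bit, 5 exponent bits, 4 mantissa bits) and the flush-to-zero/saturation policies are assumptions for the example.

```python
import struct

# Hypothetical 10-bit short FP layout: 1 sign, 5 exponent, 4 mantissa bits.
SIGN_BITS, EXP_BITS, MAN_BITS = 1, 5, 4
EXP_BIAS = (1 << (EXP_BITS - 1)) - 1  # bias = 15

def float_to_short(x: float) -> int:
    """Quantize a float32 value to the short FP bit pattern (mantissa truncated)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exp = ((bits >> 23) & 0xFF) - 127                 # unbiased float32 exponent
    man = (bits >> (23 - MAN_BITS)) & ((1 << MAN_BITS) - 1)
    e = exp + EXP_BIAS
    if e <= 0:                                        # underflow: flush to zero
        return sign << (EXP_BITS + MAN_BITS)
    e = min(e, (1 << EXP_BITS) - 1)                   # saturate: no Inf/NaN encodings
    return (sign << (EXP_BITS + MAN_BITS)) | (e << MAN_BITS) | man

def short_to_float(s: int) -> float:
    """Decode a short FP bit pattern back to a Python float."""
    sign = (s >> (EXP_BITS + MAN_BITS)) & 1
    e = (s >> MAN_BITS) & ((1 << EXP_BITS) - 1)
    man = s & ((1 << MAN_BITS) - 1)
    if e == 0:
        return 0.0
    value = (1 + man / (1 << MAN_BITS)) * 2.0 ** (e - EXP_BIAS)
    return -value if sign else value
```

Because the format reserves no encodings for NaN or infinity, the decoder needs no special-case branches, which is the hardware simplification the letter highlights; values exactly representable in 4 mantissa bits (e.g. 1.5) round-trip losslessly.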

Cited by 12 publications (10 citation statements)
References 15 publications
“…HOBFLOPS generates efficient software-emulated parallel FP arithmetic units optimized using hardware synthesis tools. HOBFLOPS investigates reduced-complexity FP [19,17] that is more efficient than fixed-point [21]. HOBFLOPS considers alternative FP formats [23] and register packing with bit-sliced arithmetic [24].…”
Section: Approach
confidence: 99%
“…Due to their excellent performance, deep learning methods have gained great interest and have been successfully introduced in many fields [20][21][22][23][24]. Stacked autoencoders (SAE) [25] and deep belief networks (DBN) [26] are employed to learn features directly from original signals for fault diagnosis of analog circuits.…”
Section: Introduction
confidence: 99%
“…RUL estimation based on a physical failure model is ineffective when dealing with large and complex nonlinear multi-operation equipment systems, while the data-driven method has been the leading research direction for RUL estimation in recent years [4]. These data-driven methods involve support vector machines [5,6,7], neural networks [8,9,10] and particle filters (PF) [11,12,13,14,15]. When dealing with nonlinear non-Gaussian noise systems, the PF technique has excellent performance [16].…”
Section: Introduction
confidence: 99%