2018 International Conference of Electrical and Electronic Technologies for Automotive
DOI: 10.23919/eeta.2018.8493233

Exploiting Posit Arithmetic for Deep Neural Networks in Autonomous Driving Applications

Abstract: This paper discusses the introduction of an integrated Posit Processing Unit (PPU) as an alternative to the Floating-point Processing Unit (FPU) for Deep Neural Networks (DNNs) in automotive applications. Autonomous driving tasks increasingly depend on DNNs. For example, the detection of obstacles by means of object classification needs to be performed in real time without involving remote computing. To speed up the inference phase of DNNs, the CPUs on board the vehicle should be equipped with co-processors,…

Cited by 31 publications (23 citation statements); references 13 publications.
“…The work demonstrates that DNN inference using 7-bit posits endures <1% accuracy degradation on ImageNet classification using AlexNet and that posits have a roughly 30% smaller memory footprint than fixed-point for multiple DNNs while maintaining a <1% drop in accuracy. Cococcioni et al. review the effectiveness of posits for autonomous driving functions [6]. A discussion of a posit processing unit as an alternative to a floating-point processing unit develops into an argument for posits, as they exhibit a better trade-off between accuracy and implementation complexity.…”
Section: Related Work
mentioning confidence: 99%
“…This topic is also the core of the automotive stream in the H2020 European Processor Initiative (embedded HPC for autonomous driving, with BMW as the main technology end-user [9,10]), which funds this work. To address the above issues, new computing arithmetic styles are appearing in research [11-20], overcoming the classic fixed-point (INT) vs. IEEE-754 floating-point duality for embedded DNN (Deep Neural Network) signal processing. Just as an example, Intel is proposing BFLOAT16 (Brain Floating Point), which has the same number of exponent bits as single-precision floating point, allowing it to replace binary32 in practical uses, although with less precision.…”
Section: Introduction
mentioning confidence: 99%
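
The BFLOAT16 remark above is easy to make concrete: a bfloat16 value is a binary32 value with the low 16 fraction bits dropped, so the sign bit and all 8 exponent bits (and hence binary32's dynamic range) survive while precision falls to 7 explicit fraction bits. The Python sketch below illustrates this; the function name to_bfloat16 and the use of simple truncation (rather than round-to-nearest-even) are illustrative assumptions, not code from the cited work.

import struct

def to_bfloat16(x: float) -> float:
    """Quantize a binary32 value to bfloat16 by truncating the low 16 bits,
    then re-expand it to binary32 for inspection. bfloat16 keeps binary32's
    sign bit and all 8 exponent bits, but only 7 of the 23 fraction bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))   # binary32 bit pattern
    (y,) = struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))
    return y

# Example: to_bfloat16(3.14159265) == 3.140625 -- same magnitude, coarser mantissa.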
“…Quantized neural networks are proposed in [18]: using data sets such as MNIST, CIFAR-10, and ImageNet, weights and activations are reduced to 1 or 2 bits, but the top-1 accuracy is limited to 51%. Recently, a novel way to represent real numbers, called Posit, has been proposed [19,20]. Basically, the Posit format can be thought of as a compressed floating-point representation, where more mantissa bits are used for small numbers and fewer mantissa bits for large numbers, within a fixed-length format (the exponent bits adapt accordingly, to maintain the fixed overall length). The Posit format [19,20] is depicted in Fig.…”
Section: Introduction
mentioning confidence: 99%
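
The "compressed floating-point" intuition quoted above can be illustrated with a small decoder. The Python sketch below is not taken from [19,20]; the function name decode_posit and the defaults n=8, es=0 are assumptions for illustration. It follows the standard posit layout (sign, regime run, terminating bit, up to es exponent bits, remaining fraction bits): the longer the regime run, the fewer bits remain for the fraction, which is exactly the tapered precision described in the quotation.

def decode_posit(bits: int, n: int = 8, es: int = 0) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float.

    After the sign bit comes a run of identical 'regime' bits, an opposite
    terminating bit, up to `es` exponent bits, and whatever bits are left
    form the fraction; long regimes leave fewer fraction bits."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")                      # Not-a-Real (NaR)
    sign = -1.0 if (bits >> (n - 1)) & 1 else 1.0
    if sign < 0:
        bits = (-bits) & mask                    # two's complement, decode as positive
    body = bits & ((1 << (n - 1)) - 1)           # drop the sign bit
    first = (body >> (n - 2)) & 1                # value of the regime bits
    run, i = 0, n - 2
    while i >= 0 and ((body >> i) & 1) == first:
        run, i = run + 1, i - 1
    k = run - 1 if first else -run               # regime contribution
    i -= 1                                       # skip the terminating bit (if any)
    e_bits = min(es, max(i + 1, 0))              # exponent bits actually present
    exp = ((body >> (i + 1 - e_bits)) & ((1 << e_bits) - 1)) << (es - e_bits) if e_bits else 0
    i -= e_bits
    f_bits = max(i + 1, 0)                       # remaining bits are the fraction
    frac = body & ((1 << f_bits) - 1)
    fraction = 1.0 + frac / (1 << f_bits)
    useed = 2 ** (2 ** es)
    return sign * (useed ** k) * (2 ** exp) * fraction

# e.g. decode_posit(0b01010000) == 1.5, decode_posit(0b00100000) == 0.5,
# decode_posit(0b11000000) == -1.0 for the default posit(8,0) configuration.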
“…The work demonstrates that, with <1% accuracy degradation, DNN parameters can be represented using 7-bit posits for AlexNet on the ImageNet corpus and that posits require ∼30% less memory utilization for the LeNet, ConvNet, and AlexNet neural networks in comparison to the fixed-point format. Secondly, Cococcioni et al. [22] discuss the effectiveness of posit arithmetic for application to autonomous driving. They consider an implementation of a Posit Processing Unit (PPU) as an alternative to the Floating-point Processing Unit (FPU), since the self-driving car standards require 16-bit floating-point representations for safety-critical applications.…”
Section: Related Work
mentioning confidence: 99%