Fast and Low-Power Quantized Fixed Posit High-Accuracy DNN Implementation

IEEE Transactions on Very Large Scale Integration (VLSI) Systems (2022)

Abstract
This brief compares quantized floating-point representations in the posit and fixed-posit formats across a wide variety of pre-trained deep neural networks (DNNs). We observe that the fixed-posit representation is far more suitable for DNNs, as it results in faster and lower-power computation circuits. We show that accuracy remains within 0.3% and 0.57% of the top-1 accuracy for posit and fixed-posit quan...
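Illustrative sketch (not from the paper): one way to emulate posit quantization of pre-trained DNN weights is to enumerate every value representable by an N-bit posit with es exponent bits and round each weight to the nearest entry. The names decode_posit and posit_quantize and the (n=8, es=1) configuration are hypothetical choices for illustration; the fixed-posit format studied in the brief additionally constrains the regime field to a fixed width, which this sketch does not model.

```python
# Hypothetical sketch: round-to-nearest quantization of weights onto the
# value set of a standard N-bit posit with "es" exponent bits.
import numpy as np

def decode_posit(bits: int, n: int, es: int) -> float:
    """Decode an n-bit posit bit pattern (given as an integer) to a float."""
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):           # 100...0 encodes NaR; excluded from the table
        return float("nan")
    sign = bits >> (n - 1)
    if sign:                           # negatives: decode the two's complement
        bits = (-bits) & ((1 << n) - 1)
    body = [(bits >> i) & 1 for i in range(n - 2, -1, -1)]  # bits after the sign
    # Regime: run of identical bits, terminated by the opposite bit.
    run_bit = body[0]
    run = 1
    while run < len(body) and body[run] == run_bit:
        run += 1
    k = run - 1 if run_bit == 1 else -run
    rest = body[run + 1:]              # skip the terminating regime bit (if any)
    # Exponent: next es bits, zero-padded if the pattern ran out of bits.
    exp_bits = (rest[:es] + [0] * es)[:es]
    e = int("".join(map(str, exp_bits)) or "0", 2)
    # Fraction: remaining bits, with an implicit leading 1.
    frac = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(rest[es:]))
    value = 2.0 ** (k * (1 << es) + e) * (1.0 + frac)
    return -value if sign else value

def posit_quantize(weights: np.ndarray, n: int = 8, es: int = 1) -> np.ndarray:
    """Round each weight to the nearest representable (n, es) posit value."""
    table = np.array([v for b in range(1 << n)
                      if not np.isnan(v := decode_posit(b, n, es))])
    table.sort()
    idx = np.clip(np.searchsorted(table, weights), 1, len(table) - 1)
    lo, hi = table[idx - 1], table[idx]
    return np.where(np.abs(weights - lo) <= np.abs(hi - weights), lo, hi)

if __name__ == "__main__":
    w = np.random.randn(5).astype(np.float32)
    print(w)
    print(posit_quantize(w, n=8, es=1))
```

The table-lookup approach is only a functional model for accuracy experiments; the hardware benefit reported in the brief comes from the dedicated fixed-posit arithmetic circuits, not from this kind of software rounding.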
Key words
Quantization (signal), Standards, Dynamic range, Very large scale integration, Program processors, Deep learning, Convolutional neural networks