SRQ: Self-Reference quantization scheme for lightweight neural network

2021 Data Compression Conference (DCC)

Abstract
Lightweight neural networks (LNNs) now play a vital role in embedded applications with limited resources. Quantizing an LNN to low bit precision is an effective solution that further reduces its computational and memory requirements. However, it remains challenging to avoid significant accuracy degradation relative to a heavier neural network, because the numerical approximatio...
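As background for the abstract above, low-bit quantization typically maps floating-point weights onto a small signed-integer grid with a per-tensor scale. The sketch below shows generic uniform symmetric quantization only; the paper's Self-Reference quantization (SRQ) scheme is not described in this excerpt, so the function and parameter names here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def uniform_quantize(w, bits=4):
    """Uniformly quantize a weight tensor to signed `bits`-bit integers.

    Generic illustration of low-bit weight quantization; NOT the
    paper's SRQ scheme, which is not specified in this excerpt.
    """
    qmax = 2 ** (bits - 1) - 1           # e.g. 7 for signed 4-bit
    scale = np.max(np.abs(w)) / qmax     # map the largest weight to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale                      # dequantize with q * scale

# Example: 4-bit codes store 8x less than float32 weights
w = np.random.randn(64).astype(np.float32)
q, scale = uniform_quantize(w, bits=4)
w_hat = q.astype(np.float32) * scale     # approximate reconstruction
```

The reconstruction error per weight is at most half a quantization step (`scale / 2`), which is the "numerical approximation" gap the abstract refers to as the source of accuracy degradation at low bit widths.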
Keywords
Degradation, Quantization (signal), Perturbation methods, Neural networks, Redundancy, Memory management, Data compression