Searching Parallel Separating Hyperplanes for Effective Compression of Threshold Logic Networks

IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2019

Cited by 5 | Views 31

Abstract
The threshold logic (TL) function, parameterized by a vector of weights and a threshold value, is an important class of Boolean functions that imitate neural information processing. When multiple TL functions are to be implemented in circuits or to be evaluated through hardware acceleration, weight sharing among them may provide an effective means of circuit minimization or data compression. We study the condition for a set of TL functions to be implementable with a common weight vector, i.e., representable by parallel separating hyperplanes, and devise a new parameter compression technique. Experimental results demonstrate a 7-fold compression ratio for libraries of TL functions with up to 6 inputs, and a reduction of data storage to about 45% of the original parameter size for the depthwise convolution layers of an activation-binarized neural network trained for CIFAR-10 classification.
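As an illustrative sketch (the function and variable names below are ours, not from the paper), a TL function outputs 1 exactly when the weighted input sum reaches the threshold; two TL functions that share a weight vector but differ only in threshold correspond to parallel separating hyperplanes, so a single copy of the weights can serve both:

```python
from itertools import product

def tl(weights, threshold, x):
    """Evaluate a threshold logic function: 1 iff w . x >= T."""
    return int(sum(w * xi for w, xi in zip(weights, x)) >= threshold)

# With the shared weight vector (1, 1), threshold 2 realizes 2-input AND
# and threshold 1 realizes 2-input OR -- parallel hyperplanes x1 + x2 = 2
# and x1 + x2 = 1, stored with one weight vector and two thresholds.
w = (1, 1)
and_table = [tl(w, 2, x) for x in product((0, 1), repeat=2)]
or_table = [tl(w, 1, x) for x in product((0, 1), repeat=2)]
print(and_table)  # [0, 0, 0, 1]
print(or_table)   # [0, 1, 1, 1]
```

The compression question the paper studies is when such a common weight vector exists for a whole set of TL functions, so that only thresholds need be stored per function.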
Keywords
searching parallel separating hyperplanes, effective compression, threshold logic networks, threshold logic function, Boolean functions, neural information processing, multiple TL functions, hardware acceleration, circuit minimization, common weight vector, parameter compression technique, data storage reduction, activation-binarized neural network, data compression, CIFAR-10 dataset classification, depthwise convolution layers, 7-fold compression ratio