LB-CNN: Convolutional Neural Network with Latent Binarization for Large Scale Multi-class Classification

2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), 2020

Abstract
Convolutional Neural Networks (CNNs) demonstrate state-of-the-art performance in large-scale multi-class image classification tasks. CNNs consist of convolution layers that progressively construct features and a classification layer. Typically, a softmax function is used in the classification layer to learn joint probabilities for the classes, which are subsequently used for class prediction. We refer to such an approach as the joint approach to multi-class classification. There exists another approach in the literature which determines the multi-class prediction outcome through a sequence of binary decisions, and is known as the class binarization approach. A popular type of class binarization is Error Correcting Output Codes (ECOC). In this paper, we propose to incorporate ECOC into CNNs by inserting a latent-binarization layer into a CNN's classification layer. This approach encapsulates both the encoding and decoding steps of ECOC in a single CNN capable of discovering an optimal coding matrix during training. The latent-binarization layer is motivated by the family of latent-trait and latent-class models used in behavioral research. We refer to the proposed CNNs with Latent Binarization as LB-CNNs, and develop algorithms combining EM and back-propagation to train LB-CNNs. The proposed models and algorithms are applied to several image recognition tasks, producing excellent results. Furthermore, LB-CNNs can also enhance the interpretability of the decision process of CNNs.
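To make the idea of an ECOC-style classification head more concrete, below is a minimal, hypothetical PyTorch sketch of a CNN whose final layer produces L binary "bit" probabilities and scores each class by agreement with a relaxed, learnable coding matrix. The layer sizes, the use of a continuous coding matrix, and plain cross-entropy training are illustrative assumptions; the paper's actual latent-trait formulation and its EM plus back-propagation training procedure are not reproduced here.

```python
# Hypothetical sketch of an ECOC-style latent-binarization head on a small CNN.
# Not the authors' implementation: the coding matrix is a learnable relaxation
# and training is ordinary cross-entropy with back-propagation only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECOCHead(nn.Module):
    """Map features to L binary bit probabilities, then score each of the
    K classes by agreement with a (learnable, relaxed) K x L coding matrix."""
    def __init__(self, in_dim: int, num_classes: int, code_len: int):
        super().__init__()
        self.bit_logits = nn.Linear(in_dim, code_len)       # one logit per binary decision
        # Relaxed coding matrix; sign(codes) would give a discrete ECOC matrix.
        self.codes = nn.Parameter(torch.randn(num_classes, code_len))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        p = torch.sigmoid(self.bit_logits(feats))           # P(bit_l = 1), shape (B, L)
        c = torch.sigmoid(self.codes)                       # relaxed codewords in (0, 1), shape (K, L)
        eps = 1e-7
        # Log-score of each class's codeword under an independent-bit model.
        log_scores = (p.clamp(eps, 1 - eps).log() @ c.t()
                      + (1 - p).clamp(eps, 1 - eps).log() @ (1 - c).t())
        return log_scores                                   # (B, K), used like class logits

class LBCNNSketch(nn.Module):
    def __init__(self, num_classes: int = 10, code_len: int = 15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = ECOCHead(64, num_classes, code_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

if __name__ == "__main__":
    model = LBCNNSketch()
    x = torch.randn(4, 3, 32, 32)                 # dummy batch of RGB images
    logits = model(x)                             # (4, 10) class scores from bit agreement
    loss = F.cross_entropy(logits, torch.randint(0, 10, (4,)))
    loss.backward()                               # end-to-end training via back-propagation
    print(logits.shape, float(loss))
```

In this sketch, decoding is implicit: the class whose relaxed codeword best matches the predicted bit probabilities receives the highest score, which loosely mirrors how ECOC decoding and the learned coding matrix are folded into a single network in the paper.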
Keywords
Binarization, Multi-class classification, Convolutional neural networks, Error correcting output codes, latent