Investigating the geometric structure of neural activation spaces with convex hull approximations

Neurocomputing (2022)

Abstract
Neural networks have achieved great success in many tasks, including data classification and pattern recognition. However, how neural networks work and what representations they learn are still not fully understood. For any data sample fed into a neural network, we ask how its corresponding vectors, spanned by activated neurons, change throughout the layers, and why the final output vector can be classified or clustered. To answer these questions formally, we define the per-layer outputs of a data sample as activation vectors and the space they span as the activation space. We then investigate the geometric structure of the high-dimensional activation spaces of neural networks by studying the geometric characteristics of large sets of activation vectors through approximated convex hulls. We find that different layers of a neural network play different roles: earlier layers disperse data points, while later layers gather them. Moreover, we propose a novel classification method based on the geometric structure of activation spaces, called nearest convex hull (NCH) classification, which operates on the activation vectors in each layer of a neural network. Empirical results show that this geometric structure can indeed be exploited for classification and often outperforms the original neural network. Finally, we demonstrate that the relationship among the convex hulls of different classes can serve as a useful metric for optimizing neural networks, in terms of over-fitting detection and network-structure simplification.
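To make the NCH idea concrete, the sketch below classifies an activation vector by its distance to each class's convex hull. Since building an exact convex hull is intractable in high dimensions, the distance is computed directly as a constrained least-squares problem over convex combinations of the class's activation vectors, here approximately minimized with Frank-Wolfe. This is only an illustrative approximation under our own assumptions; the paper's actual convex hull approximation scheme may differ, and the names `hull_distance` and `nch_classify` are hypothetical.

```python
# A minimal sketch of nearest convex hull (NCH) classification over
# activation vectors. Assumption: hull distance is approximated by
# Frank-Wolfe over the probability simplex (not necessarily the
# paper's method).
import numpy as np

def hull_distance(x, points, n_iters=200):
    """Approximate the distance from x to conv(points).

    points: (n, d) array, each row one activation vector of a class.
    Solves min_w ||points.T @ w - x||^2 subject to w >= 0, sum(w) = 1.
    """
    n = points.shape[0]
    w = np.full(n, 1.0 / n)              # start at the barycenter
    for k in range(n_iters):
        residual = points.T @ w - x      # current hull point minus query
        grad = 2.0 * points @ residual   # gradient w.r.t. the weights w
        i = np.argmin(grad)              # best simplex vertex (linear step)
        gamma = 2.0 / (k + 2)            # standard Frank-Wolfe step size
        w = (1 - gamma) * w
        w[i] += gamma
    return np.linalg.norm(points.T @ w - x)

def nch_classify(x, class_points):
    """Assign x to the class whose approximated convex hull is nearest."""
    dists = [hull_distance(x, pts) for pts in class_points]
    return int(np.argmin(dists))

# Toy usage with synthetic 5-D "activation vectors" for two classes.
rng = np.random.default_rng(0)
class_points = [rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))]
print(nch_classify(rng.normal(3, 1, 5), class_points))  # likely prints 1
```

Framing the distance as an optimization over convex combinations avoids ever enumerating hull facets, which is what makes a convex-hull-based classifier feasible for the high-dimensional activation spaces the abstract describes.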
Keywords
Neural networks, Activation space understanding, Convex hull