
Encoding Involutory Invariances in Neural Networks

IEEE International Joint Conference on Neural Networks (IJCNN), 2022

Citations: 2 | Views: 25
Abstract
In certain situations, neural networks are trained on data that obey underlying symmetries. However, the predictions do not respect the symmetries exactly unless they are embedded in the network structure. In this work, we introduce architectures that embed a special kind of symmetry, namely invariance with respect to involutory linear/affine transformations up to parity p = ±1. We provide rigorous theorems showing that the proposed network guarantees such an invariance, and we present qualitative arguments for a special universal approximation theorem. An adaptation of our techniques to CNN tasks on datasets with inherent horizontal/vertical reflection symmetry is demonstrated. Extensive experiments indicate that the proposed model outperforms baseline feed-forward and physics-informed neural networks while respecting the underlying symmetry exactly.
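The abstract concerns networks that satisfy f(Tx) = p·f(x) by construction, where T is an involutory linear map (T² = I, e.g. a reflection) and p = ±1 is the parity. Below is a minimal sketch of one generic way to obtain this property, symmetrizing an unconstrained backbone over the pair {x, Tx}; the class name InvolutoryInvariantNet, the backbone g, and all hyperparameters are illustrative assumptions, not necessarily the architecture proposed in the paper.

import torch
import torch.nn as nn

class InvolutoryInvariantNet(nn.Module):
    """Sketch: enforce f(T x) = p * f(x) for an involutory linear map T (T @ T = I)
    by averaging an unconstrained backbone g over the two-element group {I, T}.
    Generic symmetrization construction; hypothetical, not the paper's exact model."""

    def __init__(self, T: torch.Tensor, parity: int = 1, hidden: int = 64):
        super().__init__()
        assert parity in (+1, -1)
        # T is assumed to satisfy T @ T == I (an involution), e.g. a coordinate reflection.
        self.register_buffer("T", T)
        self.parity = parity
        dim = T.shape[0]
        self.g = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # f(x) = (g(x) + p * g(T x)) / 2, hence f(T x) = p * f(x) exactly,
        # because T is its own inverse.
        gx = self.g(x)
        gTx = self.g(x @ self.T.t())
        return 0.5 * (gx + self.parity * gTx)

if __name__ == "__main__":
    # Example: reflection of the first coordinate in 2D, even parity (p = +1).
    T = torch.tensor([[-1.0, 0.0], [0.0, 1.0]])
    net = InvolutoryInvariantNet(T, parity=+1)
    x = torch.randn(8, 2)
    print(torch.allclose(net(x), net(x @ T.t()), atol=1e-6))  # True

Because T is its own inverse, averaging the backbone over {x, Tx} with weight p enforces the invariance (or anti-invariance, for p = -1) identically for any backbone weights; how this relates to the paper's specific construction and its universal approximation argument is not determined by the abstract alone.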
Keywords
neural networks,symmetries,invariances,universal approximation