Outlier-Efficient Hopfield Layers for Large Transformer-Based Models

Jerry Yao-Chieh Hu, Pei-Hsuan Chang, Robin Luo, Hong-Yu Chen, Weijian Li, Wei-Po Wang, Han Liu

arXiv (2024)

Abstract
We introduce an Outlier-Efficient Modern Hopfield Model (termed OutEffHop) and use it to address the outlier-induced challenge of quantizing gigantic transformer-based models. Our main contribution is a novel associative memory model facilitating outlier-efficient associative memory retrievals. Interestingly, this memory model manifests a model-based interpretation of an outlier-efficient attention mechanism (Softmax_1): it is an approximation of the memory retrieval process of OutEffHop. Methodologically, this allows us to debut novel outlier-efficient Hopfield layers, a powerful alternative to attention with superior post-quantization performance. Theoretically, the Outlier-Efficient Modern Hopfield Model retains and improves the desirable properties of standard modern Hopfield models, including fixed-point convergence and exponential storage capacity. Empirically, we demonstrate the proposed model's efficacy across large-scale transformer-based and Hopfield-based models (including BERT, OPT, ViT, and STanHop-Net), benchmarking against state-of-the-art methods including Clipped_Softmax and Gated_Attention. Notably, OutEffHop achieves on average ~22+% reductions in both average kurtosis and maximum infinity norm of model outputs across four models.
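The outlier-efficient attention mechanism (Softmax_1) referenced in the abstract replaces the usual normalization exp(x_i) / Σ_j exp(x_j) with exp(x_i) / (1 + Σ_j exp(x_j)). The extra +1 in the denominator acts like an implicit all-zero logit, so an attention head can assign (near-)zero total weight instead of being forced to spread probability mass and amplify activation outliers. Below is a minimal PyTorch sketch of that normalization under this assumption; the function name softmax_1 and the implementation details are illustrative, not the authors' released code.

```python
import torch

def softmax_1(logits: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Softmax_1: softmax with an extra +1 in the denominator,
    i.e. softmax_1(x)_i = exp(x_i) / (1 + sum_j exp(x_j)).
    Illustrative sketch, not the paper's official implementation."""
    # Shift by the (non-negative) max for numerical stability; the
    # implicit zero logit is shifted by the same amount, hence exp(-m).
    m = logits.max(dim=dim, keepdim=True).values.clamp(min=0.0)
    exp_shifted = torch.exp(logits - m)
    return exp_shifted / (exp_shifted.sum(dim=dim, keepdim=True) + torch.exp(-m))

# Usage inside a single-head attention score computation (toy shapes):
q = torch.randn(4, 16)   # queries
k = torch.randn(10, 16)  # keys
attn = softmax_1(q @ k.T / 16 ** 0.5)  # each row sums to <= 1, not exactly 1
```

Because the rows of attn can sum to less than one, a head with nothing useful to retrieve can produce a near-zero update, which is the behavior the abstract ties to the reduced kurtosis and infinity norm of model outputs.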