Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Map

International Conference on Learning Representations (2023)

Abstract
Given that Transformers are ubiquitous across a wide range of tasks, interpreting their internals is a pivotal issue. Still, one of their components, the feed-forward (FF) block, has typically received less analysis despite accounting for a substantial share of the parameters. We analyze the input contextualization effects of FF blocks by rendering them in the attention maps as a human-friendly visualization scheme. Our experiments with both masked and causal language models reveal that FF networks modify the input contextualization to emphasize specific types of linguistic compositions. In addition, the FF block and its surrounding components tend to cancel out each other's effects, suggesting potential redundancy in the processing of the Transformer layer.
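The paper's analysis itself is not reproduced here, but as a rough illustration of the ingredients involved, the sketch below uses Hugging Face transformers to pull the ordinary self-attention maps from a masked LM (bert-base-uncased, an assumed stand-in) and, via forward hooks on one layer's FF sub-modules, a crude per-token norm of how much the FF block (together with its residual connection and LayerNorm) changes each representation. This is only a minimal probe under those assumptions, not the authors' norm-based rendering of FF effects as attention maps; the model name, layer index, and hook placement are illustrative choices.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed stand-in model and layer; the paper studies both masked and causal LMs.
model_name = "bert-base-uncased"
layer_idx = 6

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)
model.eval()

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")

# Forward hooks to capture the hidden states entering and leaving the FF block.
captured = {}
layer = model.encoder.layer[layer_idx]

def save_ff_input(module, args, output):
    # Input to the intermediate dense layer = representation entering the FF block.
    captured["ff_in"] = args[0].detach()

def save_ff_output(module, args, output):
    # Output of the BertOutput module = after FF dense + residual + LayerNorm.
    captured["ff_out"] = output.detach()

handles = [
    layer.intermediate.register_forward_hook(save_ff_input),
    layer.output.register_forward_hook(save_ff_output),
]

with torch.no_grad():
    outputs = model(**inputs)
for h in handles:
    h.remove()

# Ordinary self-attention map for the same layer, averaged over heads: (seq, seq).
attn = outputs.attentions[layer_idx][0].mean(dim=0)

# Crude per-token magnitude of the FF block's modification (includes residual + LayerNorm).
ff_update = (captured["ff_out"] - captured["ff_in"]).norm(dim=-1)[0]

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for i, tok in enumerate(tokens):
    top = attn[i].argmax().item()
    print(f"{tok:>10s}  attends most to {tokens[top]:>10s}  |FF-block update| = {ff_update[i].item():.3f}")
```

The printout only juxtaposes a vanilla attention map with a per-token FF-update magnitude; rendering the FF effect itself as a token-to-token map, as the paper does, requires the authors' decomposition.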
Keywords
transformers, attention, map, feed-forward