A 28nm 0.22μJ/token Memory-Compute-Intensity-Aware CNN-Transformer Accelerator with Hybrid-Attention-Based Layer-Fusion and Cascaded Pruning for Semantic-Segmentation

IEEE International Solid-State Circuits Conference (2025)

Keywords
Energy Consumption, Decoding, Sparsity, Receptive Field, Transformer Model, Computational Overhead, CNN Model, Open Reduction, Semantic Segmentation Task, Hardware Accelerators, Language Processing Tasks, External Access, Left Matrix, Convolutional Weights, Backbone Segments