Energy-Aware Heterogeneous Federated Learning via Approximate Systolic DNN Accelerators
CoRR (2024)
Abstract
In Federated Learning (FL), the devices that participate in training usually
have heterogeneous resources, e.g., differing energy availability. In current
FL deployments, devices that do not fulfill certain hardware requirements are
often dropped from the collaborative training. However, dropping devices in FL
can degrade training accuracy and introduce bias or unfairness. Several works
have tackled this problem on the algorithmic level, e.g., by letting
constrained devices train a subset of the server neural network (NN) model.
However, it has been observed that these techniques are not effective with
respect to accuracy. Importantly, they make simplistic assumptions about
devices' resources via indirect metrics such as multiply-accumulate (MAC)
operations or peak memory requirements. In this work, for the first time, we
consider on-device accelerator design for FL with heterogeneous devices. We
utilize compressed arithmetic formats and approximate computing to satisfy
limited energy budgets. Using a hardware-aware energy model, we observe that,
in contrast to the moderate energy reduction achieved by the state of the art,
our technique lowers the energy requirements by 4x while maintaining higher
accuracy.
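
To make the abstract's idea concrete, below is a minimal NumPy sketch of how compressed arithmetic formats and a MAC-count-based energy model could interact: each device emulates low-bit-width operands via fake quantization and estimates per-layer energy from an assumed per-MAC cost table, so a constrained device can pick a bit-width that fits its budget instead of being dropped. The function names (`fake_quantize`, `approx_dense`) and the pJ-per-MAC figures are hypothetical illustrations, not the paper's accelerator design or its hardware-aware energy model.

```python
import numpy as np

# Illustrative per-MAC energy costs (pJ) for a systolic-array PE at
# reduced operand bit-widths. These numbers are assumptions for this
# sketch, not figures from the paper's hardware-aware energy model.
ENERGY_PER_MAC_PJ = {8: 0.20, 6: 0.12, 4: 0.06}

def fake_quantize(x, bits):
    """Uniform symmetric quantization to `bits`: values are rounded to
    the integer grid but returned as floats (simulated low precision)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax or 1.0  # guard against all-zero input
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

def approx_dense(x, w, bits):
    """Dense layer with both operands compressed to `bits` bits; returns
    the layer output and an energy estimate derived from the MAC count."""
    macs = x.shape[0] * w.shape[0] * w.shape[1]
    energy_pj = macs * ENERGY_PER_MAC_PJ[bits]
    return fake_quantize(x, bits) @ fake_quantize(w, bits), energy_pj

# Each device would pick the largest bit-width whose estimated energy
# per local step still fits its budget.
rng = np.random.default_rng(0)
x, w = rng.normal(size=(32, 64)), rng.normal(size=(64, 10))
for bits in (8, 6, 4):
    _, e = approx_dense(x, w, bits)
    print(f"{bits}-bit operands: {e / 1e3:.2f} nJ per forward batch")
```

Note that estimating energy from the MAC count times a per-bit-width cost is itself a simplification; the paper's point is precisely that a hardware-aware model of the accelerator gives a more faithful picture than such indirect metrics alone.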