Social Learning with Non-Bayesian Local Updates

2023 31st European Signal Processing Conference (EUSIPCO)

Abstract
In non-Bayesian social learning, the agents of a network form their belief about a hypothesis of interest by performing individual Bayesian updates, which are then shared with their neighbors and aggregated according to a suitable pooling rule. This social learning scheme is called non-Bayesian because the pooling rule cannot be Bayesian owing to the limitations arising from the distributed learning setting. However, traditional non-Bayesian learning relies on using a local Bayesian update rule. In this work, we move away from this assumption and consider instead non-Bayesian learning with non-Bayesian updates. Taking as a benchmark the optimal centralized posterior, we show that this modified strategy can outperform traditional social learning and that, intriguingly, it can attain the same error exponent as the optimal scheme under two opposite scenarios: when the data are independent across the agents and when there are agents with highly dependent data.
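The scheme described above, which this work generalizes, can be sketched in code. The following is an illustrative toy example, not the paper's implementation: three agents observing Gaussian data perform local Bayesian updates and then aggregate their neighbors' intermediate beliefs with geometric (log-linear) pooling, one common choice of pooling rule. The network topology, combination weights, and Gaussian likelihoods are all assumptions for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): 3 agents, 2 hypotheses.
# Observations are unit-variance Gaussians whose mean depends on the
# hypothesis; hypothesis 0 is the true one generating the data.
means = np.array([0.0, 1.0])          # mean under hypotheses 0 and 1
A = np.array([[0.6, 0.2, 0.2],        # doubly stochastic combination
              [0.2, 0.6, 0.2],        # matrix encoding the network
              [0.2, 0.2, 0.6]])
n_agents, n_hyp = 3, 2

beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)  # uniform priors

def likelihood(x, mean):
    """Gaussian likelihood of observation x under the given mean."""
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2.0 * np.pi)

for _ in range(200):
    x = rng.normal(means[0], 1.0, size=n_agents)   # data from hypothesis 0
    # Step 1: individual Bayesian update at each agent.
    lik = likelihood(x[:, None], means[None, :])
    psi = beliefs * lik
    psi /= psi.sum(axis=1, keepdims=True)
    # Step 2: geometric (log-linear) pooling of neighbors' beliefs.
    beliefs = np.exp(A @ np.log(psi))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs[:, 0])   # each agent's belief in the true hypothesis
```

Under this setup all agents' beliefs concentrate on the true hypothesis; the paper's contribution is to replace the local Bayesian update in step 1 with a non-Bayesian one and analyze the resulting error exponent against the centralized posterior benchmark.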
Keywords
Social learning, Bayesian update, Large deviations, Opinion formation, Distributed decision-making