On The Convergence of ADAM-Type Algorithms for Solving Structured Single Node and Decentralized Min-Max Saddle Point Games

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022

Abstract
Many modern machine learning problems require solving min-max saddle point games, which are NP-hard to solve in general. To overcome this issue, most available algorithms aim to find a first-order Nash equilibrium solution, which always exists under mild assumptions. However, existing algorithms for obtaining such solutions are non-adaptive and exhibit slow convergence rates in practical settings. Further, most of these algorithms are centralized in nature and cannot be adapted to a decentralized architecture in a straightforward manner. This study addresses these issues by introducing general two-step adaptive algorithms for obtaining first-order Nash equilibrium solutions of min-max games in both single-node and decentralized architectures. We also obtain non-asymptotic convergence rates of the algorithms, assuming the objective functions satisfy the weak Minty variational inequality condition, which is standard in the recent literature. Finally, we illustrate the performance of the proposed algorithms by using them to train neural networks that are more robust to adversarial attacks than those trained with existing algorithms.
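To make the setting concrete, the sketch below shows a generic ADAM-style gradient descent-ascent update for a min-max problem min_x max_y f(x, y). This is only an illustrative sketch of the general "adaptive two-player update" idea, not the paper's exact two-step algorithm; the callables grad_x and grad_y are hypothetical placeholders for the partial gradients of f.

```python
import numpy as np

def init_state(x, y):
    """First/second moment estimates for an ADAM-style descent-ascent sketch."""
    return {"t": 0,
            "m_x": np.zeros_like(x), "v_x": np.zeros_like(x),
            "m_y": np.zeros_like(y), "v_y": np.zeros_like(y)}

def adam_gda_step(x, y, grad_x, grad_y, state,
                  lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM-style step for min_x max_y f(x, y).

    grad_x(x, y), grad_y(x, y): hypothetical callables returning the
    partial gradients of f. Illustrative only, not the authors' method.
    """
    state["t"] += 1
    t = state["t"]
    gx, gy = grad_x(x, y), grad_y(x, y)      # simultaneous gradients at (x, y)

    # Min player: adaptive descent step on x.
    state["m_x"] = beta1 * state["m_x"] + (1 - beta1) * gx
    state["v_x"] = beta2 * state["v_x"] + (1 - beta2) * gx**2
    m_hat = state["m_x"] / (1 - beta1**t)    # bias-corrected first moment
    v_hat = state["v_x"] / (1 - beta2**t)    # bias-corrected second moment
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)

    # Max player: adaptive ascent step on y.
    state["m_y"] = beta1 * state["m_y"] + (1 - beta1) * gy
    state["v_y"] = beta2 * state["v_y"] + (1 - beta2) * gy**2
    m_hat = state["m_y"] / (1 - beta1**t)
    v_hat = state["v_y"] / (1 - beta2**t)
    y = y + lr * m_hat / (np.sqrt(v_hat) + eps)
    return x, y
```

In a decentralized architecture, each node would typically run an update of this kind on its local objective and then average (or otherwise mix) its iterates with neighbors; the paper's algorithms and guarantees cover that setting as well.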
Keywords
Min-max games, Generative adversarial networks, Adaptive algorithms, Decentralized optimization