Accelerating Variants of the Conjugate Gradient with the Variable Precision Processor

Yves Durand, Eric Guthmuller, Cesar Fuguet, Jérôme Fereyre, Andrea Bocco, Riccardo Alidori

2022 IEEE 29th Symposium on Computer Arithmetic (ARITH)

Abstract
Linear algebra kernels such as linear solvers and eigensolvers are the working engine underneath many scientific applications. The growing scale of these applications has led researchers to rely on high-precision computing to improve their efficiency and stability. In this work, we investigate the impact of arbitrary extended precision on multiple variants of the Conjugate Gradient method (CG). We show how our VRP processor improves the convergence and efficiency of these kernels. We also illustrate how our set of tools (library, software environment) makes it possible to migrate legacy applications in a fast and intuitive way while preserving high performance. We observe up to an 8X improvement in kernel iteration count, and up to a 40% improvement in latency. Nevertheless, the main benefit is the stability gained through the extra precision: it makes it possible to solve larger, ill-conditioned systems without costly compensating techniques.
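The abstract describes running CG variants at arbitrary extended working precision on the VRP hardware. As a rough, purely illustrative sketch of what "CG at extended precision" means in software terms, the snippet below implements a plain CG iteration on a symmetric positive-definite system using Python's mpmath library to widen the working precision. This is an assumption for illustration only; the paper targets the VRP processor and its own library and tools, not mpmath, and the digit count and tolerance shown are arbitrary choices.

```python
from mpmath import mp, matrix, norm

def cg_extended_precision(A, b, digits=50, tol=None, max_iter=1000):
    """Plain Conjugate Gradient on an SPD system A x = b, carried out
    with `digits` decimal digits of working precision via mpmath.
    Illustrative sketch only; not the paper's VRP implementation."""
    mp.dps = digits                       # set arbitrary working precision
    n = A.rows
    x = matrix(n, 1)                      # initial guess x0 = 0
    r = b - A * x                         # initial residual
    p = r.copy()                          # initial search direction
    rs_old = (r.T * r)[0]
    if tol is None:
        tol = mp.mpf(10) ** (-digits + 10)
    for k in range(max_iter):
        Ap = A * p
        alpha = rs_old / (p.T * Ap)[0]    # step length
        x = x + alpha * p
        r = r - alpha * Ap
        if norm(r) < tol:                 # converged on residual norm
            return x, k + 1
        rs_new = (r.T * r)[0]
        p = r + (rs_new / rs_old) * p     # update search direction
        rs_old = rs_new
    return x, max_iter
```

With a wider working precision, the recurrence for the residual and search direction accumulates less rounding error, which is the mechanism behind the reduced iteration counts on ill-conditioned systems that the abstract reports; the hardware approach in the paper obtains this without the large software overhead of a multiprecision library.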
Keywords
Variable Precision,Linear Algebra Kernels,Scientific Computing