b8c: SpMV accelerator implementation leveraging high memory bandwidth

2023 IEEE 31st Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)

Abstract
Sparse Matrix-Vector multiplication (SpMV), computing $y=A\times x$ where $y, x$ are dense vectors and $A$ is a sparse matrix, is a key kernel in many HPC applications. Vitis Sparse Library's double precision SpMV (VSpMV) [1] is, to the best of our knowledge, the only performance-oriented, double-precision (64-bit) floating point implementation of SpMV on FPGAs equipped with High Bandwidth Memory (HBM).
Key words
FPGA, SpMV, HBM, HLS, double precision, high performance computing, sparse matrix representation