Fast Matrix-Vector Multiplications for Large-Scale Logistic Regression on Shared-Memory Systems

IEEE International Conference on Data Mining (2015)

Cited 36 | Views 114
Abstract
Shared-memory systems such as regular desktops now possess enough memory to store large data. However, the training process for data classification can still be slow if we do not fully utilize the power of multi-core CPUs. Many existing works proposed parallel machine learning algorithms by modifying serial ones, but convergence analysis may be complicated. Instead, we do not modify machine learning algorithms, but consider those that can take advantage of parallel matrix operations. We particularly investigate the use of parallel sparse matrix-vector multiplications in a Newton method for large-scale logistic regression. Various implementations from easy to sophisticated ones are analyzed and compared. Results indicate that under suitable settings excellent speedup can be achieved.
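The abstract's core idea — a Newton method for logistic regression whose cost is dominated by sparse matrix-vector multiplications — can be illustrated by the standard truncated-Newton formulation, where the Hessian-vector product is computed from two products with the data matrix X and one with its transpose, without ever forming the Hessian. The sketch below is an assumption based on the common L2-regularized setup (labels y in {-1, +1}, regularization parameter C); the paper's exact formulation and parallel implementations are not detailed in this abstract.

```python
import numpy as np
import scipy.sparse as sp

def hessian_vector_product(X, y, w, v, C=1.0):
    """Hessian-vector product for L2-regularized logistic regression.

    For the objective (1/2) w^T w + C * sum_i log(1 + exp(-y_i w^T x_i)),
    the Hessian is H = I + C * X^T D X, where D is diagonal with
    D_ii = s_i * (1 - s_i) and s_i = sigma(y_i w^T x_i).
    H v is evaluated with sparse matrix-vector products only.
    """
    z = X @ w                          # SpMV with X (to build D)
    s = 1.0 / (1.0 + np.exp(-y * z))   # sigmoid of the margins
    d = s * (1.0 - s)                  # diagonal of D
    Xv = X @ v                         # SpMV with X
    return v + C * (X.T @ (d * Xv))    # SpMV with X^T

# Small example: compare against the explicitly formed Hessian.
rng = np.random.default_rng(0)
X = sp.random(20, 5, density=0.3, random_state=0, format="csr")
y = np.where(rng.random(20) > 0.5, 1.0, -1.0)
w = rng.standard_normal(5)
v = rng.standard_normal(5)

z = X @ w
s = 1.0 / (1.0 + np.exp(-y * z))
H = np.eye(5) + (X.T @ sp.diags(s * (1.0 - s)) @ X).toarray()
assert np.allclose(hessian_vector_product(X, y, w, v), H @ v)
```

In a shared-memory parallel implementation, the products `X @ v` and `X.T @ (...)` are exactly the operations one would parallelize across cores (e.g., by partitioning rows of X among threads), which is why the paper's speedup hinges on the quality of the parallel SpMV.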
Keywords
sparse matrix, parallel matrix-vector multiplication, classification, Newton method