Parallel Data-Local Training for Optimizing Word2Vec Embeddings for Word and Graph Embeddings

2019 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC)

Cited by 5
Abstract
The Word2Vec model is a neural network-based unsupervised word embedding technique widely used in applications such as natural language processing, bioinformatics, and graph mining. Because Word2Vec repeatedly performs Stochastic Gradient Descent (SGD) to minimize its objective function, it is very compute-intensive. However, existing methods for parallelizing Word2Vec are not sufficiently optimized for data locality to achieve high performance. In this paper, we develop a parallel, data-locality-enhanced Word2Vec algorithm based on Skip-gram with a novel negative sampling method that decouples the loss calculation for positive samples from that for negative samples; this allows us to reformulate the operations on the negative samples over a sentence as efficient matrix-matrix products. Experimental results demonstrate that our parallel implementations on multi-core CPUs and GPUs achieve significant performance improvements over existing state-of-the-art parallel Word2Vec implementations while maintaining evaluation quality. We also show the utility of our Word2Vec implementation within the Node2Vec algorithm, where it accelerates embedding learning for large graphs.
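To make the reformulation in the abstract concrete: for a center word w and context word c, Skip-gram with negative sampling minimizes -log σ(u_w·v_c) − Σ_k log σ(−u_w·v_{n_k}) over K negative samples n_k (Mikolov et al., 2013). The NumPy sketch below is not the authors' implementation; all names, sizes, and the uniform negative sampler are hypothetical. It only illustrates the general idea the abstract describes: once one set of negative samples is shared by every (center, context) pair in a sentence, the scores against the negatives collapse into a single matrix-matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
V, D, K = 10_000, 100, 5          # vocabulary, embedding dim, negatives per step
lr = 0.025                         # learning rate

W_in = rng.normal(0.0, 0.01, (V, D))   # input (word) embeddings
W_out = np.zeros((V, D))               # output (context) embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(center_ids, context_ids):
    """One SGD step over all B (center, context) pairs of a sentence.

    A single negative-sample set is shared by every pair, so the negative
    scores become one (B, D) x (D, K) matrix product instead of B * K
    separate dot products.
    """
    neg_ids = rng.integers(0, V, size=K)   # shared negatives (uniform here;
                                           # Word2Vec draws from a unigram^0.75 table)
    U = W_in[center_ids]                   # (B, D) center vectors
    C = W_out[context_ids]                 # (B, D) positive context vectors
    N = W_out[neg_ids]                     # (K, D) shared negative vectors

    pos_grad = (sigmoid(np.sum(U * C, axis=1)) - 1.0)[:, None]  # (B, 1)
    neg_scores = sigmoid(U @ N.T)                               # (B, K): the GEMM

    grad_U = pos_grad * C + neg_scores @ N                      # (B, D)
    # np.add.at accumulates correctly when an index repeats within a batch.
    np.add.at(W_out, context_ids, -lr * pos_grad * U)
    np.add.at(W_out, neg_ids, -lr * (neg_scores.T @ U))
    np.add.at(W_in, center_ids, -lr * grad_U)

# Toy usage: (center, context) pairs from a 4-token sentence, window size 1.
centers = np.array([3, 17, 17, 42, 42, 8])
contexts = np.array([17, 3, 42, 17, 8, 42])
sgns_step(centers, contexts)
```

Batching the negatives this way is what makes a GEMM-based, data-local formulation possible; the paper's actual implementation differs in its sampling scheme, update ordering, and CPU/GPU parallelization.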
Keywords
performance improvement, GPU, multicore CPU, matrix-matrix operations, Skip-gram, stochastic gradient descent method, graph embeddings, Node2Vec algorithm, Word2Vec implementation, negative sampling method, parallel data-locality-enhanced Word2Vec algorithm, neural network-based unsupervised word embedding technique, Word2Vec model, Word2Vec embeddings, parallel data-local training