Multi-GPU training and parallel CPU computing for machine learning experiments using the Ariadne library

P. Goncharov, A. Nikolskaia, G. Ososkov, E. Rezvaya, D. Rusov, E. Shchavelev

9th International Conference "Distributed Computing and Grid Technologies in Science and Education" (2021)

Abstract
Modern machine learning (ML) tasks and neural network (NN) architectures require huge amounts of GPU computational facilities and demand high CPU parallelization for data preprocessing. At the same time, the Ariadne library, which aims to solve complex high-energy physics tracking tasks with the help of deep neural networks, lacks multi-GPU training and efficient parallel data preprocessing on the CPU. In our work, we present our approach to multi-GPU training in the Ariadne library. We present efficient data caching, parallel CPU data preprocessing, and a generic ML experiment setup for prototyping, training, and inference of deep neural network models. Results in terms of speed-up and performance for the existing neural network approaches are presented with the help of the GOVORUN computing resources.
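To illustrate the kind of multi-GPU training the abstract describes, below is a minimal sketch using PyTorch's DistributedDataParallel (Ariadne targets PyTorch-based models). The model, dataset, and hyperparameters here are hypothetical placeholders, not Ariadne's actual API; the sketch only shows the general pattern of sharding data across GPUs while loading batches in parallel CPU worker processes.

```python
# Minimal multi-GPU training sketch with PyTorch DistributedDataParallel.
# All names below (model, toy data, hyperparameters) are hypothetical
# placeholders; the paper does not specify Ariadne's training entry points.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each spawned process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy tensors standing in for preprocessed tracking events.
    features = torch.randn(4096, 16)
    labels = torch.randint(0, 2, (4096,))
    dataset = TensorDataset(features, labels)

    # DistributedSampler shards the data across GPUs; num_workers > 0 runs
    # data loading/preprocessing in parallel CPU worker processes.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler, num_workers=4)

    model = torch.nn.Sequential(
        torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
    ).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradients are all-reduced across GPUs here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched, for example, as `torchrun --nproc_per_node=4 train.py` to use four GPUs on one node.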
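The abstract also mentions data caching combined with parallel CPU preprocessing. One common way to realize this, sketched below under the assumption of joblib for on-disk caching and process-based parallelism, is to cache each preprocessed event so repeated experiment runs skip recomputation. The function `process_event` and the per-event computation are hypothetical; the paper does not describe Ariadne's preprocessing API.

```python
# Cached, parallel CPU preprocessing sketch using joblib.
# process_event and the raw-hit format are hypothetical placeholders.
import numpy as np
from joblib import Memory, Parallel, delayed

memory = Memory("./cache", verbose=0)  # cached results persist on disk


@memory.cache
def process_event(event_id: int) -> np.ndarray:
    # Stand-in for expensive per-event work (hit filtering, normalization, ...).
    rng = np.random.default_rng(event_id)
    hits = rng.normal(size=(100, 3))
    return (hits - hits.mean(axis=0)) / hits.std(axis=0)


if __name__ == "__main__":
    # n_jobs=4 preprocesses events in four parallel CPU worker processes;
    # a second run reads results back from ./cache instead of recomputing.
    events = Parallel(n_jobs=4)(delayed(process_event)(i) for i in range(1000))
    print(len(events), events[0].shape)
```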
Key words
Ariadne library, parallel CPU computing, machine learning experiments, multi-GPU