NeurOPar, A Neural Network-driven EDP Optimization Strategy for Parallel Workloads

2023 IEEE 35th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD)

Abstract
The pursuit of energy efficiency has been driving the development of techniques to optimize hardware resource usage in high-performance computing (HPC) servers. On multicore architectures, thread-level parallelism (TLP) exploitation, dynamic voltage and frequency scaling (DVFS), and uncore frequency scaling (UFS) are three popular methods applied to improve the trade-off between performance and energy consumption, represented by the energy-delay product (EDP). However, the complexity of selecting the optimal configuration (TLP degree, DVFS, and UFS) for each application poses a challenge to software developers and end-users due to the massive number of possible configurations. To tackle this challenge, we propose NeurOPar, an optimization strategy for parallel workloads driven by an artificial neural network (ANN). It uses representative hardware and software metrics to build and train an ANN model that predicts combinations of thread count and core/uncore frequency levels that provide optimal EDP results. Through experiments on four multicore processors using twenty-five applications, we demonstrate that NeurOPar predicts combinations that yield EDP values close to the best ones achieved by an exhaustive search and improve the overall EDP by 42% compared to the default execution of HPC applications. We also show that NeurOPar can enhance the execution of parallel applications without incurring the performance and energy penalties associated with online methods by comparing it with two state-of-the-art strategies.
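To make the optimization target concrete, a minimal sketch of the energy-delay product (EDP) and the exhaustive configuration search that NeurOPar's ANN aims to approximate. The configurations and measurements below are hypothetical, illustrative values, not data from the paper:

```python
def edp(energy_j: float, runtime_s: float) -> float:
    """Energy-delay product: energy (J) times delay (s); lower is better."""
    return energy_j * runtime_s

# Hypothetical measurements: (threads, core frequency MHz) -> (energy J, runtime s).
configs = {
    (8, 1200):  (90.0, 4.0),
    (16, 2100): (120.0, 2.5),
    (32, 2100): (150.0, 2.4),
}

# Exhaustive search over all configurations -- the costly baseline that a
# trained predictor would replace by estimating the best (TLP, DVFS/UFS) combo.
best = min(configs, key=lambda c: edp(*configs[c]))
```

Here the 16-thread configuration wins despite using more energy than the 8-thread one, because its shorter runtime yields a lower EDP; this is the performance/energy trade-off the metric captures.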
Keywords
Parallel Computing, Artificial Neural Network, Performance-Energy Optimization