Accelerating Reservoir Simulators using GPU Technology

John R. Appleyard, Jeremy D. Appleyard, Mark A. Wakefield, Arnaud L. Desitter

2011

Abstract
Recent advances in graphics processing units (GPUs) and the associated development environments suitable for scientific modeling have generated significant interest in the high-performance computing arena. In this paper we investigate strategies for incorporating this new technology into an existing commercial reservoir simulator. The use of the GPU for solving linear systems is demonstrated, and the algorithmic considerations required to exploit the hardware are discussed. The paper describes a massively parallel incomplete factorization which is used as a preconditioner in conjunction with the GMRES algorithm. This factorization balances parallelism with accuracy, resulting in a method that is significantly faster than the current serial solver when both are implemented on current hardware, despite generally requiring more iterations to converge. The performance of the implementation is shown to depend on problem size and indicates that, when fully loaded, the GPU is capable of producing a factor-of-10 speed-up in the linear solver compared with the CPU-based serial solver. The algorithm can also be applied on cluster systems using domain decomposition, and although no numerical results are yet available, there are grounds for anticipating that performance will scale well for sufficiently large problems. The paper also discusses the benefits of migrating other components of the simulator, such as the Jacobian matrix assembly, to the GPU. This is shown to improve the overall performance of the simulator considerably.
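The abstract describes pairing an incomplete factorization preconditioner with GMRES. The paper's contribution is a massively parallel GPU formulation of that factorization, which is not reproduced here; the following is only a minimal serial sketch of the same algorithmic pattern (ILU-preconditioned GMRES) using SciPy on a stand-in matrix, with the problem size, drop tolerance, and test matrix all assumed for illustration.

```python
# Minimal sketch of incomplete-factorization-preconditioned GMRES (serial,
# CPU-only, via SciPy). This is NOT the paper's massively parallel GPU
# factorization; it only illustrates the preconditioner/Krylov pairing.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 10_000                                        # assumed problem size
# Simple 1-D Laplacian used as a stand-in for a reservoir Jacobian (assumption).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization wrapped as a preconditioner operator.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

# Preconditioned GMRES: typically far fewer iterations than the
# unpreconditioned iteration, at the cost of computing the factorization.
x, info = spla.gmres(A, b, M=M, restart=30, maxiter=200)
print("converged" if info == 0 else f"gmres info = {info}",
      "| residual norm:", np.linalg.norm(b - A @ x))
```

In the paper's setting the factorization and the sparse matrix-vector products would both run on the GPU, and the trade-off noted in the abstract applies: a more parallel (less accurate) factorization may need more GMRES iterations, but each iteration runs much faster on the massively parallel hardware.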