Deep Reinforcement Learning for Generalizable Field Development Optimization

SPE Journal (2022)

Abstract
The optimization of field development plans (FDPs), which includes optimizing well count, well locations, and drilling sequence, is crucial in reservoir management because it has a strong impact on the economics of the project. Traditional optimization studies are scenario specific, and their solutions do not generalize to new scenarios (e.g., a new earth model or a new price assumption) that were not seen before. In this paper, we develop an artificial intelligence (AI) using deep reinforcement learning (DRL) to address the generalizable field development optimization problem, in which the AI can provide optimized FDPs in seconds for new scenarios within its range of applicability. In the proposed approach, the field development optimization problem is formulated as a Markov decision process (MDP) in terms of states, actions, environment, and rewards. The policy function, which maps the current reservoir state to the optimal next action, is represented by a deep convolutional neural network (CNN). This policy network is trained using DRL on simulation runs of a large number of different scenarios generated to cover a "range of applicability." Once trained, the DRL AI can be applied to obtain optimized FDPs for new scenarios at minimal computational cost. While the proposed methodology is general, in this paper we apply it to develop a DRL AI that provides optimized FDPs for greenfield primary depletion problems with vertical wells. This AI is trained on more than 3×10^6 scenarios with different geological structures, rock and fluid properties, operational constraints, and economic conditions, and thus has a wide range of applicability. After training, the DRL AI yields optimized FDPs for new scenarios within seconds. The solutions from the DRL AI suggest that, starting with no reservoir engineering knowledge, the DRL AI has developed the intelligence to place wells at "sweet spots," maintain proper well spacing and well count, and drill early. In a blind test, the solution from the DRL AI is demonstrated to outperform that from the reference agent (an optimized pattern-drilling strategy) almost 100% of the time. The DRL AI is being applied to a real field, and preliminary results are promising. Because the DRL AI optimizes a policy rather than a plan for one particular scenario, it can be applied to obtain optimized development plans for different scenarios at a very low computational cost. This is fundamentally different from traditional optimization methods, which not only require thousands of runs for one scenario but also lack the ability to generalize to new scenarios.
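To make the MDP formulation concrete, below is a minimal sketch of what a CNN policy network of the kind described in the abstract might look like. The paper does not specify its architecture, so the layer sizes, the input channels (gridded property maps such as permeability, porosity, pressure, and an existing-well mask, plus broadcast scalars such as oil price), the grid dimensions, and the action space (one "drill here" action per grid cell plus a "drill nothing" action) are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FDPPolicyNet(nn.Module):
    """Hypothetical CNN policy: maps stacked reservoir-state maps to a
    distribution over drilling actions (one logit per candidate grid
    cell, plus one 'drill no new well' logit)."""

    def __init__(self, in_channels: int = 6, grid: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # One logit per candidate well location.
        self.head = nn.Conv2d(64, 1, kernel_size=1)
        # One extra logit for the "drill nothing" action.
        self.stop = nn.Linear(64 * grid * grid, 1)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # state: (batch, channels, grid, grid) property maps plus
        # broadcast economic/operational scalars as constant channels.
        h = self.features(state)
        place_logits = self.head(h).flatten(1)   # (batch, grid*grid)
        stop_logit = self.stop(h.flatten(1))     # (batch, 1)
        return torch.cat([place_logits, stop_logit], dim=1)


# Sample one drilling decision for a single (random) scenario state.
net = FDPPolicyNet()
state = torch.randn(1, 6, 32, 32)
action = torch.distributions.Categorical(logits=net(state)).sample()
```

In an actual training loop, such a network would be updated with a policy-gradient DRL algorithm against simulator rewards (e.g., incremental project economics per drilling step); the abstract does not name the specific algorithm used.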