Petroleum Science > 2023, Volume 20, Issue 1 | DOI: https://doi.org/10.1016/j.petsci.2022.08.016
Evolutionary-assisted reinforcement learning for reservoir real-time production optimization under uncertainty (Open Access)
Article Information
Authors: Zhong-Zheng Wang, Kai Zhang, Guo-Dong Chen, Jin-Ding Zhang, Wen-Dong Wang, Hao-Chen Wang, Li-Ming Zhang, Xia Yan, Jun Yao
Citation: Zhong-Zheng Wang, Kai Zhang, Guo-Dong Chen, Jin-Ding Zhang, Wen-Dong Wang, Hao-Chen Wang, Li-Ming Zhang, Xia Yan, Jun Yao, Evolutionary-assisted reinforcement learning for reservoir real-time production optimization under uncertainty, Petroleum Science, Volume 20, Issue 1, 2023, Pages 261-276, https://doi.org/10.1016/j.petsci.2022.08.016.
Abstract
Production optimization has gained increasing attention from the smart oilfield community because it can substantially increase economic benefits and oil recovery. While existing methods can produce highly optimized results, their high computational demands prevent their use for real-time optimization of large-scale reservoirs. In addition, most methods assume a deterministic reservoir model and ignore the uncertainty of the subsurface environment, making the obtained schemes unreliable for practical deployment. In this work, an efficient and robust method, namely evolutionary-assisted reinforcement learning (EARL), is proposed to achieve real-time production optimization under uncertainty. Specifically, the production optimization problem is modeled as a Markov decision process in which a reinforcement learning (RL) agent interacts with the reservoir simulator to train a control policy that maximizes the specified goals. To address the brittle convergence properties and the lack of efficient exploration strategies of RL approaches, a population-based evolutionary algorithm (EA) is introduced to assist the training of the agent: it provides diverse exploration experiences and promotes stability and robustness through its inherent redundancy. Compared with prior methods that only optimize a solution for a particular scenario, the proposed approach trains a policy that can adapt to uncertain environments and make real-time decisions to cope with unknown changes. The trained policy, represented by a deep convolutional neural network, adaptively adjusts the well controls based on different reservoir states. Simulation results on two reservoir models show that the proposed approach not only outperforms standalone RL and EA methods in terms of optimization efficiency but also has strong robustness and real-time decision-making capacity.
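To make the coupling described in the abstract more concrete, the sketch below shows one common way an evolutionary population can assist RL training, consistent with the abstract's description: a population of policies is rolled out on the reservoir simulator, their transitions fill a shared replay buffer that a gradient-based RL agent learns from, and the agent's policy is periodically injected back into the population. This is a minimal illustrative sketch, not the paper's actual EARL implementation; all names and interfaces (`evaluate`, `earl_train`, `mutate`, the `simulator`, `rl_agent`, and `policy` objects) are assumptions made here for illustration.

```python
# Hypothetical sketch of an evolutionary-assisted RL training loop.
# Assumed interfaces (not from the paper):
#   simulator.reset() -> state; simulator.step(action) -> (next_state, reward, done)
#   policy: torch.nn.Module with .act(state) returning well-control actions
#   rl_agent: has .policy and .update(replay_buffer) for gradient-based learning
import copy
import random

import torch


def mutate(policy, sigma=0.1):
    """Gaussian parameter-noise mutation (one common EA choice)."""
    for p in policy.parameters():
        p.data += sigma * torch.randn_like(p.data)
    return policy


def evaluate(policy, simulator):
    """Roll out a policy on the simulator; return (fitness, transitions)."""
    state, done, fitness, transitions = simulator.reset(), False, 0.0, []
    while not done:
        action = policy.act(state)                      # e.g. well control settings
        next_state, reward, done = simulator.step(action)
        transitions.append((state, action, reward, next_state, done))
        fitness += reward                               # e.g. cumulative NPV
        state = next_state
    return fitness, transitions


def earl_train(rl_agent, population, simulator, replay_buffer,
               generations=100, inject_every=5):
    for gen in range(generations):
        # 1) Evaluate the evolutionary population; share its rollouts.
        scored = []
        for policy in population:
            fitness, transitions = evaluate(policy, simulator)
            replay_buffer.extend(transitions)
            scored.append((fitness, policy))

        # 2) Selection + mutation: keep the best half, perturb copies of them.
        scored.sort(key=lambda x: x[0], reverse=True)
        elites = [p for _, p in scored[: len(scored) // 2]]
        population = elites + [mutate(copy.deepcopy(random.choice(elites)))
                               for _ in range(len(scored) - len(elites))]

        # 3) Gradient-based RL update from the shared replay buffer.
        rl_agent.update(replay_buffer)

        # 4) Periodically inject the RL agent's policy into the population,
        #    replacing the weakest individual.
        if gen % inject_every == 0:
            population[-1] = copy.deepcopy(rl_agent.policy)
    return rl_agent, population
```

In this kind of scheme, the injection step is what lets gradient information flow back into the population, while the population's diverse rollouts keep the replay buffer from collapsing onto a single exploration pattern; this reading matches the stability and exploration benefits the abstract attributes to the evolutionary component.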
Keywords
Production optimization; Deep reinforcement learning; Evolutionary algorithm; Real-time optimization; Optimization under uncertainty