Petroleum Science > DOI: https://doi.org/10.1016/j.petsci.2025.03.009
Optimization of plunger lift working systems using reinforcement learning for coupled wellbore/reservoir (Open Access)
Article Information
Authors: Zhi-Sheng Xing, Guo-Qing Han, You-Liang Jia, Wei Tian, Hang-Fei Gong, Wen-Bo Jiang, Pei-Dong Mai, Xing-Yuan Liang
Affiliations:
Submission date:
Cite as: Zhi-Sheng Xing, Guo-Qing Han, You-Liang Jia, Wei Tian, Hang-Fei Gong, Wen-Bo Jiang, Pei-Dong Mai, Xing-Yuan Liang, Optimization of plunger lift working systems using reinforcement learning for coupled wellbore/reservoir, Petroleum Science, 2025, https://doi.org/10.1016/j.petsci.2025.03.009.
Abstract
In the mid-to-late stages of gas reservoir development, liquid loading in gas wells becomes a common challenge. Plunger lift, as an intermittent production technique, is widely used for deliquification in gas wells. With the advancement of big data and artificial intelligence, the future of oil and gas field development is trending towards intelligent, unmanned, and automated operations. Currently, the optimization of plunger lift working systems is primarily based on expert experience and manual control, focusing mainly on the success of the plunger lift without adequately considering the impact of different working systems on gas production. Additionally, liquid loading in gas wells is a dynamic process, and the intermittent nature of plunger lift requires accurate modeling; using constant inflow dynamics to describe reservoir flow introduces significant errors. To address these challenges, this study establishes a coupled wellbore–reservoir model for plunger lift wells and validates the computational wellhead pressure results against field measurements. Building on this model, a novel optimization control algorithm based on the deep deterministic policy gradient (DDPG) framework is proposed. The algorithm aims to optimize plunger lift working systems to balance overall reservoir pressure, stabilize gas–water ratios, and maximize gas production. Through simulation experiments in three different production optimization scenarios, the effectiveness of reinforcement learning algorithms (including RL, PPO, DQN, and the proposed DDPG) and traditional optimization algorithms (including GA, PSO, and Bayesian optimization) in enhancing production efficiency is compared. The results demonstrate that the coupled model provides highly accurate calculations and can precisely describe the transient production of wellbore and gas reservoir systems. The proposed DDPG algorithm achieves the highest reward value during training with minimal error, leading to a potential increase in cumulative gas production by up to 5% and cumulative liquid production by 252%. The DDPG algorithm exhibits robustness across different optimization scenarios, showcasing excellent adaptability and generalization capabilities.
Keywords
Plunger lift; Liquid loading; Deliquification; Reinforcement learning; DDPG; Artificial intelligence
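The abstract does not give implementation details, but the DDPG-based control loop it describes can be illustrated with a minimal sketch. The code below is a simplified, hypothetical actor-critic update in PyTorch: the state, action, and reward definitions (pressures and liquid loading as state, shut-in and afterflow durations as actions, incremental gas production as reward) are illustrative assumptions, not taken from the paper, and the coupled wellbore/reservoir simulator that would act as the environment is not shown.

```python
# Minimal DDPG sketch for plunger lift cycle control (illustrative only).
# Assumed, hypothetical problem setup: state = [casing pressure, tubing
# pressure, liquid loading]; action = [shut-in time, afterflow time] scaled
# to [-1, 1]; reward = incremental gas production per cycle. The coupled
# wellbore/reservoir model would supply the transitions (not shown here).

import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 3, 2

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 64), nn.ReLU(),
              nn.Linear(64, 64), nn.ReLU(),
              nn.Linear(64, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())        # deterministic policy mu(s)
critic = mlp(STATE_DIM + ACTION_DIM, 1)               # action-value Q(s, a)
target_actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
target_critic = mlp(STATE_DIM + ACTION_DIM, 1)
target_actor.load_state_dict(actor.state_dict())
target_critic.load_state_dict(critic.state_dict())

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005

def ddpg_update(batch):
    """One DDPG gradient step on a batch of (s, a, r, s') transitions."""
    s, a, r, s2 = batch
    r = r.view(-1, 1)

    # Critic: regress Q(s, a) toward the bootstrapped one-step target.
    with torch.no_grad():
        q_next = target_critic(torch.cat([s2, target_actor(s2)], dim=1))
        q_target = r + GAMMA * q_next
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of Q(s, mu(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak averaging of the target networks.
    for net, tgt in ((actor, target_actor), (critic, target_critic)):
        for p, tp in zip(net.parameters(), tgt.parameters()):
            tp.data.mul_(1 - TAU).add_(TAU * p.data)

# Example update with a random batch of 32 transitions; in practice the batch
# would be sampled from a replay buffer filled by running the simulator, and
# exploration noise and terminal handling would be added on top.
batch = (torch.randn(32, STATE_DIM),
         torch.rand(32, ACTION_DIM) * 2 - 1,
         torch.randn(32, 1),
         torch.randn(32, STATE_DIM))
ddpg_update(batch)
```

The actor's Tanh output keeps actions in a bounded range that a real controller would rescale to physical shut-in and afterflow durations; this rescaling, like the rest of the setup, is an assumption for illustration rather than the paper's method.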