Ensemble-based Reservoir History Matching Using Hyper-reduced-order Models

Ensemble-based Reservoir History Matching Using Hyper-reduced-order Models PDF Author: Seonkyoo Yoon
Publisher:
ISBN:
Category :
Languages : en
Pages : 106

Book Description
Subsurface flow modeling is an indispensable task in reservoir management, but its computational cost is burdensome owing to model complexity and the many simulation runs required by applications such as production optimization, uncertainty quantification, and history matching. To relieve this burden, a reduced-order modeling procedure based on hyper-reduction is presented. The procedure consists of three components: state reduction, constraint reduction, and nonlinearity treatment. State reduction is based on proper orthogonal decomposition (POD), and the impact of state reduction on accuracy and predictability is investigated under different strategies for collecting snapshots. Petrov-Galerkin projection is used for constraint reduction, and a hyper-reduction that couples the Petrov-Galerkin projection with a 'gappy' reconstruction is applied for the nonlinearity treatment. The hyper-reduction method is the Gauss-Newton framework with approximated tensors (GNAT), and the main contribution of this study is a procedure for applying the method to subsurface flow simulation. A fully implicit oil-water two-phase subsurface flow model in three-dimensional space is considered, and the proposed hyper-reduced-order modeling procedure achieves a runtime speedup of more than 300 relative to the full-order method, which cannot be achieved when only constraint reduction is adopted. In addition, two types of sequential Bayesian filtering for history matching are considered to investigate how well the developed hyper-reduced-order model relieves the associated computational cost. First, an ensemble Kalman filter (EnKF) is considered for Gaussian systems, and a procedure embedding the hyper-reduced model (HRM) into the EnKF is presented.
The use of the HRM in the EnKF significantly reduces the computational cost without much loss of accuracy, but the combination requires a few remedies, such as clustering to find an optimal reduced-order model according to the spatial similarity of geological conditions, which adds computation. For non-Gaussian systems, an advanced particle filter known as the regularized particle filter (RPF) is considered because it makes no distributional assumptions. Particle filtering has rarely been applied to reservoir history matching because it is hard to place the initial particles in highly probable regions of the state space, especially for large-scale systems, which makes the required number of particles scale exponentially with the model dimension. To resolve these issues, reparameterization is adopted to reduce the order of the geological parameters. Principal component analysis (PCA) is used to compute a reduced space of the model parameters, and by constraining the filtering analysis to the computed subspace, the required number of initial particles can be reduced to a manageable level. Consequently, a large computational saving is achieved by embedding the HRM into the RPF. Furthermore, the additional cost of clustering to identify the geospatially optimal reduced-order model is avoided because the advanced particle filter makes it easy to identify groups of geospatially similar particles.
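The POD state reduction described above can be illustrated with a minimal NumPy sketch (not the thesis code; the synthetic snapshot matrix and the 99.9% energy threshold are illustrative assumptions): take a thin SVD of the snapshot matrix, keep the leading left singular vectors as the reduced basis, and project full-order states onto it.

```python
import numpy as np

# Minimal POD sketch (illustrative, not the thesis code).
# Column j of S is a snapshot of the reservoir state at time j; here the
# snapshots are synthetic with a prescribed low-rank structure.
rng = np.random.default_rng(0)
n_cells, n_snapshots = 500, 40
U0, _ = np.linalg.qr(rng.standard_normal((n_cells, 5)))
V0, _ = np.linalg.qr(rng.standard_normal((n_snapshots, 5)))
S = U0 @ np.diag([10.0, 5.0, 2.0, 1.0, 0.5]) @ V0.T

# Thin SVD of the snapshot matrix; the left singular vectors are POD modes.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)

# Keep enough modes to capture 99.9% of the snapshot "energy".
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
k = int(np.searchsorted(energy, 0.999)) + 1
Phi = U[:, :k]                      # reduced basis, n_cells x k

# A full-order state x is approximated as Phi @ (Phi.T @ x).
rel_err = np.linalg.norm(Phi @ (Phi.T @ S) - S) / np.linalg.norm(S)
```

With the five retained modes here, a reduced state carries 5 degrees of freedom instead of 500; in a hyper-reduced model the governing equations are likewise projected onto this basis.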

Reservoir History Matching Using Constrained Ensemble Kalman Filter and Particle Filter Methods

Reservoir History Matching Using Constrained Ensemble Kalman Filter and Particle Filter Methods PDF Author: Abhiniandhan Raghu
Publisher:
ISBN:
Category : Ecological heterogeneity
Languages : en
Pages : 126

Book Description
The high heterogeneity of petroleum reservoirs, represented by their spatially varying rock properties (porosity and permeability), greatly dictates the quantity of recoverable oil. In this work, the estimation of these rock properties, which is crucial for predicting the future performance of a reservoir, is carried out through history matching using constrained ensemble Kalman filtering (EnKF) and particle filtering (PF) methods. The first part of the thesis addresses some of the main limitations of the conventional EnKF. The EnKF, formulated on the grounds of Monte Carlo sampling and the Kalman filter (KF), arrives at parameter estimates through statistical analysis and hence can yield reservoir parameter estimates that are not geologically realistic or consistent. To overcome this limitation, hard and soft data constraints are incorporated into the recursive EnKF estimation methodology. Hard data refers to the actual values of the reservoir parameters at discrete locations obtained by core sampling and well logging. The soft data considered here is obtained from the variogram, which characterizes the spatial correlation of the rock properties in a reservoir. In this algorithm, the correlation matrix obtained after the unconstrained EnKF update is transformed to honour the true correlation structure from the variogram by applying a scaling and projection method. The thesis also deals with the problem of spurious correlations induced by the Kalman gain computations in the EnKF update step, which can lead to erroneous parameter updates. To solve this issue, a covariance localization-based EnKF coupled with geostatistics is implemented for reservoir history matching. These algorithms are implemented on two synthetic reservoir models, and their efficacy in yielding estimates consistent with the geostatistics is observed.
It is found that the computational time of the localization-based EnKF framework for reservoir history matching is considerably reduced owing to the smaller parameter space in the EnKF update step. The geostatistics-based covariance localization also captures the spatial heterogeneity and variability of the reservoir permeability better than traditional methods. In the second part of the thesis, we extend the history matching implementation to particle filtering. Because reservoir models are nonlinear, the distributions of the noise and parameters are generally non-Gaussian. Since the EnKF may fail to obtain accurate estimates when the distributions involved are non-Gaussian, we use a fully Bayesian filter, the particle filter, to estimate reservoir parameters. In addition, the geostatistics-based covariance localization is coupled with the particle filter and is found to perform better than the filter without localization.
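As a rough illustration of the covariance localization idea described above (not the thesis implementation; the linear observation operator, Gaussian taper, and length scale are assumptions made for the sketch), an EnKF analysis step with a distance-based taper on the cross-covariance might look like this:

```python
import numpy as np

# Hedged EnKF-with-localization sketch (not the thesis implementation).
rng = np.random.default_rng(1)
n_param, n_obs, n_ens = 100, 5, 50

X = rng.standard_normal((n_param, n_ens))   # ensemble of parameter fields

# Observe the parameter directly at five cells (stand-in for a simulator).
obs_cells = np.array([10, 30, 50, 70, 90])
H = np.zeros((n_obs, n_param))
H[np.arange(n_obs), obs_cells] = 1.0

Y = H @ X                                   # predicted data per member
d_obs = rng.standard_normal(n_obs)          # "observed" data
R = 0.1 * np.eye(n_obs)                     # observation-error covariance

# Ensemble anomalies and sample covariances.
Xa = X - X.mean(axis=1, keepdims=True)
Ya = Y - Y.mean(axis=1, keepdims=True)
Cxy = Xa @ Ya.T / (n_ens - 1)               # parameter-data cross-covariance
Cyy = Ya @ Ya.T / (n_ens - 1)

# Localization: taper spurious long-range correlations by distance.
dist = np.abs(np.arange(n_param)[:, None] - obs_cells[None, :])
rho = np.exp(-(dist / 20.0) ** 2)           # Gaussian taper, length scale 20
K = (rho * Cxy) @ np.linalg.inv(Cyy + R)    # localized Kalman gain

# Update each member toward perturbed observations.
D = d_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
X_updated = X + K @ (D - Y)
```

Without the taper `rho`, the finite-ensemble sample cross-covariance contains spurious long-range entries that update cells far from any observation; tapering them toward zero is what allows the smaller ensemble sizes the abstract reports.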

Assisted History Matching for Unconventional Reservoirs

Assisted History Matching for Unconventional Reservoirs PDF Author: Sutthaporn Tripoppoom
Publisher: Gulf Professional Publishing
ISBN: 0128222433
Category : Science
Languages : en
Pages : 290

Book Description
As unconventional reservoir activity grows, reservoir engineers who rely on history matching face a time-consuming task when characterizing hydraulic fracture and reservoir properties, which are expensive and difficult to obtain. Assisted History Matching for Unconventional Reservoirs delivers a critical tool for today's engineers by proposing an assisted history matching (AHM) workflow. The AHM workflow quantifies uncertainty without bias and without being trapped in local minima, and this reference helps the engineer integrate an efficient, non-intrusive model for fractures that works with any commercial simulator. Additional benefits include field case studies, such as the Marcellus shale play, and visuals on the advantages and disadvantages of alternative models. Rounding out with additional references for deeper learning, Assisted History Matching for Unconventional Reservoirs gives reservoir engineers a holistic view of how to model today's fractures and unconventional reservoirs. - Provides understanding of simulations for hydraulic fractures, natural fractures, and shale reservoirs using the embedded discrete fracture model (EDFM) - Reviews automatic and assisted history matching algorithms, including visuals on the advantages and limitations of each model - Captures data on the uncertainties of fracture and reservoir properties for better probabilistic production forecasting and well placement

Reservoir Characterization and History Matching with Uncertainty Quantification Using Ensemble-based Data Assimilation with Data Re-parameterization

Reservoir Characterization and History Matching with Uncertainty Quantification Using Ensemble-based Data Assimilation with Data Re-parameterization PDF Author: Mingliang Liu
Publisher:
ISBN:
Category : Carbon sequestration
Languages : en
Pages : 153

Book Description
Reservoir characterization and history matching are essential steps in subsurface applications such as petroleum exploration and production and geological carbon sequestration; they aim to estimate the rock and fluid properties of the subsurface from geophysical measurements and borehole data. Mathematically, both tasks can be formulated as inverse problems that seek earth models consistent with the true measurements. The objective of this dissertation is to develop a stochastic inversion method that improves both the accuracy of predicted reservoir properties and the quantification of the associated uncertainty by assimilating surface geophysical observations and borehole production data using Ensemble Smoother with Multiple Data Assimilation. To avoid the common phenomenon of ensemble collapse, in which model uncertainty is underestimated, we propose to re-parameterize the high-dimensional geophysical data with order-reduction methods, for example singular value decomposition and a deep convolutional autoencoder, and then perform the model updates efficiently in the low-dimensional data space. We first apply the method to seismic and rock-physics inversion for the joint estimation of elastic and petrophysical properties from pre-stack seismic data. In the production or monitoring stage, we extend the proposed method to seismic history matching for the prediction of porosity and permeability models by integrating both time-lapse seismic and production data. The proposed method is tested on synthetic examples and successfully applied in petroleum exploration and production and carbon dioxide sequestration.
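The SVD-based data re-parameterization described above can be sketched as follows (illustrative only; the synthetic data and the 99% variance threshold are assumptions, not details from the dissertation): project the high-dimensional data anomalies onto the leading singular vectors so that subsequent ensemble updates operate on a few reduced coordinates per member rather than the full data vector.

```python
import numpy as np

# Illustrative data re-parameterization via truncated SVD (assumed details).
rng = np.random.default_rng(2)
n_data, n_ens = 10_000, 100

# High-dimensional "seismic" data per ensemble member; the true variability
# is confined to ~10 latent modes, as is typical for spatially correlated data.
D = rng.standard_normal((n_data, 10)) @ rng.standard_normal((10, n_ens))

Da = D - D.mean(axis=1, keepdims=True)      # anomalies about ensemble mean
U, s, _ = np.linalg.svd(Da, full_matrices=False)

# Keep modes carrying 99% of the variance.
frac = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(frac, 0.99)) + 1

# Each member is now summarized by r numbers instead of n_data.
D_reduced = U[:, :r].T @ Da
rel_err = np.linalg.norm(U[:, :r] @ D_reduced - Da) / np.linalg.norm(Da)
```

Updating in the `r`-dimensional space avoids inverting a data covariance of size `n_data`, which is what makes assimilating full seismic volumes tractable.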

History-matching of Petroleum Reservoir Models by the Ensemble Kalman Filter and Parameterization Methods

History-matching of Petroleum Reservoir Models by the Ensemble Kalman Filter and Parameterization Methods PDF Author: Leila Heidari
Publisher:
ISBN:
Category :
Languages : en
Pages : 224

Book Description
History matching enables the integration of data acquired after the start of production into the reservoir model building workflow. The ensemble Kalman filter (EnKF) is a sequential assimilation, or history-matching, method capable of integrating measured data as soon as they are obtained. This work is based on the application of the EnKF for history matching and is divided into two main sections. The first section applies the EnKF to several case studies in order to better understand the merits and shortcomings of the method. These case studies include two synthetic cases (one simple and one rather complex), a facies model, and a real reservoir model. In most cases the method succeeds in reproducing the measured data; the problems encountered are explained and possible solutions proposed. The second section presents two newly proposed algorithms combining the EnKF with two parameterization methods, the pilot point method and the gradual deformation method, which are capable of preserving second-order statistical properties (mean and covariance). Both algorithms are applied to the simple synthetic case study. For the pilot point method, the application succeeds given an adequate number and proper positioning of pilot points. For gradual deformation, the application can succeed provided the background ensemble is large enough. For both cases, improvement scenarios are proposed and further applications to more complex cases are recommended.
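The covariance-preserving property that makes gradual deformation attractive can be demonstrated in a few lines (a minimal sketch, not the thesis algorithm; the covariance model and deformation parameter are arbitrary choices): combining two independent Gaussian realizations with cos/sin weights yields a new realization with the same mean and covariance for any deformation parameter t.

```python
import numpy as np

# Minimal gradual-deformation sketch (assumed form, not the thesis code):
# cos(pi t) * y1 + sin(pi t) * y2 stays zero-mean with covariance C because
# cos^2(pi t) + sin^2(pi t) = 1 and y1, y2 are independent.
rng = np.random.default_rng(3)
n, N = 4, 200_000
L = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)
C = L @ L.T                                  # target covariance

t = 0.3                                      # arbitrary deformation parameter
Y1 = L @ rng.standard_normal((n, N))         # realizations with covariance C
Y2 = L @ rng.standard_normal((n, N))
Yt = np.cos(np.pi * t) * Y1 + np.sin(np.pi * t) * Y2

# Empirical covariance of the deformed fields matches the target.
C_emp = np.cov(Yt)
max_rel_err = np.abs(C_emp - C).max() / np.abs(C).max()
```

Varying t continuously deforms one realization into another while every intermediate field remains a valid draw from the prior, which is what lets a history-matching loop search over t without degrading the second-order statistics.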

Heterogeneous Reservoir Characterization Utilizing Efficient Geology Preserving Reservoir Parameterization Through Higher Order Singular Value Decomposition (HOSVD)

Heterogeneous Reservoir Characterization Utilizing Efficient Geology Preserving Reservoir Parameterization Through Higher Order Singular Value Decomposition (HOSVD) PDF Author: Sardar Afra
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
Petroleum reservoir parameter inference is a challenging problem in many reservoir simulation workflows, especially for real reservoirs with a high degree of complexity, nonlinearity, and dimensionality. Estimating a large number of unknowns in an inverse problem leads to a very costly computational effort. Moreover, it is important to make geologically consistent parameter adjustments as data are assimilated during history matching, i.e., the process of adjusting the parameters of the reservoir system so that the model output matches past production data. It is therefore of great interest to approximate petrophysical properties such as permeability and porosity while reparameterizing them through reduced-order models. Petroleum reservoir models are in general complex, nonlinear, and large-scale, i.e., they have large numbers of states and unknown parameters, so a practical approach that reduces the number of reservoir parameters and reconstructs the model in a lower dimension is of high interest. Furthermore, de-correlating system parameters while keeping the geological description intact is paramount for controlling the ill-posedness of history matching and reservoir characterization problems. In the first part of this work, we introduce a novel parameterization method based on higher-order singular value decomposition (HOSVD). We show that HOSVD outperforms classical parameterization techniques in computational and implementation cost, and that it provides more reliable and accurate predictions in the reservoir history matching problem owing to its ability to preserve geological features of parameters such as permeability. 
The power of HOSVD is investigated on several synthetic and real reservoir benchmarks, and all results are compared with classic SVD. Beyond the parameterization problem itself, we also assess the ability of HOSVD-based models to reproduce the production data of the original reservoir system; the results are generated with the commercial reservoir simulator ECLIPSE. In the second part, we address the inverse modeling, i.e., the reservoir history matching problem, using the ensemble Kalman filter (EnKF), an ensemble-based characterization approach. We integrate the new parameterization technique into the EnKF algorithm to study the suitability of HOSVD-based parameterization for reducing the dimensionality of the parameter space and for estimating geologically consistent permeability distributions. Numerical examples on synthetic and real reservoir benchmarks illustrate the characteristics of the proposed method, and the advantages of HOSVD are discussed by comparing its performance with the classic SVD (PCA) parameterization approach. The electronic version of this dissertation is accessible from http://hdl.handle.net/1969.1/154968
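A minimal HOSVD sketch follows (illustrative only; the tensor is a random stand-in for a gridded property field such as log-permeability on an nx x ny x nz grid): factor a 3-D array into a core tensor and one orthogonal factor matrix per mode; truncating the columns of the factor matrices would give the reduced parameterization the abstract advocates.

```python
import numpy as np

# Rough HOSVD sketch on a random stand-in for a 3-D property field.
rng = np.random.default_rng(4)
T = rng.standard_normal((8, 9, 10))

def unfold(T, mode):
    # Mode-n unfolding: put `mode` first, flatten the remaining axes.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Factor matrices: left singular vectors of each mode unfolding.
U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(3)]

# Core tensor: multilinear projection of T onto the three bases.
G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])

# With untruncated factors the decomposition is exact; truncating the
# columns of each U[m] yields the reduced-order parameterization.
T_rec = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
err = np.abs(T_rec - T).max()
```

Unlike a plain SVD of the flattened field, the per-mode bases retain the tensor's directional (x, y, z) structure, which is the property credited for preserving geological features.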

PCA-based Reduced Order Models in Oil Reservoir Simulation and Optimization

PCA-based Reduced Order Models in Oil Reservoir Simulation and Optimization PDF Author: Mahmud R. Siamizade
Publisher:
ISBN:
Category :
Languages : en
Pages : 0

Book Description


Ensemble-based History Matching and Its Application in Estimating Reservoir Petrophysical Properties

Ensemble-based History Matching and Its Application in Estimating Reservoir Petrophysical Properties PDF Author: Heng Li
Publisher:
ISBN:
Category : Oil reservoir engineering
Languages : en
Pages : 402

Book Description
A method of integrating the ensemble-based history matching with the geostatistical technique has been developed and successfully applied to improve estimation of the absolute permeability and porosity. The newly developed technique is validated with a synthetic reservoir. Compared to the existing implicit estimation approaches, the newly developed technique does not require the gradient of the objective function, and thus it is easy to implement.

History Matching and Uncertainty Characterization

History Matching and Uncertainty Characterization PDF Author: Alexandre Emerick
Publisher: LAP Lambert Academic Publishing
ISBN: 9783659107283
Category :
Languages : en
Pages : 264

Book Description
In the last decade, ensemble-based methods have been widely investigated and applied for data assimilation of flow problems associated with atmospheric physics and petroleum reservoir history matching. Among these methods, the ensemble Kalman filter (EnKF) is the most popular one for history-matching applications. The main advantages of EnKF are computational efficiency and easy implementation. Moreover, because EnKF generates multiple history-matched models, EnKF can provide a measure of the uncertainty in reservoir performance predictions. However, because of the inherent assumptions of linearity and Gaussianity and the use of limited ensemble sizes, EnKF does not always provide an acceptable history-match and does not provide an accurate characterization of uncertainty. In this work, we investigate the use of ensemble-based methods, with emphasis on the EnKF, and propose modifications that allow us to obtain a better history match and a more accurate characterization of the uncertainty in reservoir description and reservoir performance predictions.

Continuous Reservoir Model Updating Using an Ensemble Kalman Filter with a Streamline-based Covariance Localization

Continuous Reservoir Model Updating Using an Ensemble Kalman Filter with a Streamline-based Covariance Localization PDF Author: Elkin Rafael Arroyo Negrete
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
This work presents a new approach that combines the comprehensive capabilities of the ensemble Kalman filter (EnKF) with flow path information from streamlines to eliminate or reduce some of the problems and limitations of using the EnKF for history matching reservoir models. The recent use of the EnKF for data assimilation and for assessing uncertainties in future forecasts in reservoir engineering appears promising. The EnKF provides ways of incorporating any type of production data or time-lapse seismic information in an efficient way. However, using the EnKF for history matching comes with its share of challenges and concerns. Overshooting of parameters leading to loss of geologic realism, a possible increase in the material balance errors of the updated phase(s), and limitations associated with non-Gaussian permeability distributions are some of the most critical problems of the EnKF. A larger ensemble size may mitigate some of these problems but is prohibitively expensive in practice. We present a streamline-based conditioning technique that can be implemented with the EnKF to eliminate or reduce these problems, allowing a reduced ensemble size and thereby significant time savings in field-scale implementation. Our approach involves no extra computational cost and is easy to implement. Additionally, the final history-matched model tends to preserve most of the geological features of the initial geologic model. An overview of the procedure is provided to enable its incorporation into current EnKF implementations. Our procedure uses the streamline path information to condition the covariance matrix in the Kalman update. We demonstrate the power and utility of our approach with synthetic examples and a field case. 
Our results show that with the conditioning technique presented in this thesis, the overshooting/undershooting problems disappear and the limitations of working with non-Gaussian distributions are reduced. Finally, an analysis of the scalability of a parallel implementation of our computer code is given.