One way of classifying history matching methods is in terms of how much a particular method explores the parameter space versus exploits local regions of that space to find a minimum of the objective function. The figure on the right schematically summarizes the various methods that have been applied to history matching problems. In the oil & gas industry, initial efforts in history matching principally employed gradient methods (exploitation), which spend their computational time finding the single best combination of parameters that minimizes the objective function. Recently, however, efforts have shifted, and a growing consensus in the industry is that methods emphasizing exploration are preferable, for several reasons.
The most important of these reasons bears re-emphasizing: experienced reservoir engineers know that forecasting behavior from a single, history-matched reservoir model is problematic. Accurate forecasting is very challenging for many reservoirs, yet decisions costing an oil company millions of dollars are made on the basis of a forecast from a single model. The problem, in essence, is that uncertainty cannot be deduced from a single model, as shown in the figure below.
Given a large amount of time and infinite computer resources for a particular problem, the ideal approach to history matching would be to sample evenly all possible combinations of parameter values (Uniform Search). In this ideal circumstance, one would create a very large number of reservoir models and compare the reservoir responses with historical data. If the prior model is defined properly (see the discussion on pre-HM screening), one or more history matched models can be identified. This approach describes a very basic (and prohibitively expensive) sampling method, but it illustrates the concept of sampling and exploration of the parameter space, as sketched below. Other sampling methods attempt to use computational resources more efficiently, yet in general remain very costly.
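As a concrete illustration, the sketch below implements a uniform search over two hypothetical parameters (a permeability multiplier and an aquifer strength), with a trivial stand-in for the reservoir simulator. The function names, parameter ranges, and match tolerance are assumptions chosen for illustration only, not a prescription.

```python
import itertools
import numpy as np

def run_simulation(perm_mult, aquifer_strength):
    """Hypothetical placeholder for a full reservoir simulation run;
    returns a single response (e.g., cumulative oil produced)."""
    return perm_mult * 100.0 + aquifer_strength * 10.0

historical_response = 180.0   # assumed historical observation

# Uniform search: evaluate every combination of parameter values.
perm_mults = np.linspace(0.5, 2.0, 16)
aquifer_strengths = np.linspace(0.0, 10.0, 11)

matched_models = []
for pm, aq in itertools.product(perm_mults, aquifer_strengths):
    simulated = run_simulation(pm, aq)
    misfit = (simulated - historical_response) ** 2  # objective function
    if misfit < 25.0:  # assumed tolerance for a "history matched" model
        matched_models.append((pm, aq, misfit))

print(f"{len(matched_models)} matched models out of "
      f"{len(perm_mults) * len(aquifer_strengths)} simulations")
```

Even in this toy setting the cost issue is apparent: the number of simulations grows exponentially with the number of parameters, which is why uniform search is prohibitively expensive for real reservoir models.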
Optimization methods sacrifice exploration of the parameter space for exploitation, and therefore (in general) require fewer computational resources than sampling to find history matched models. The sacrifice, however, is that not all of the models that match history may be found. The risk in this approach is that the reservoir forecasts used to quantify uncertainty may be too narrow (an under-estimation of uncertainty) compared to a sampling procedure, and the degree of under-estimation is rarely, if ever, known.
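For contrast with the uniform search above, the sketch below uses a simple finite-difference gradient descent, one of many possible exploitation strategies, to drive the same kind of misfit toward a single history-matched model. The proxy objective, starting point, step size, and iteration count are all assumptions for illustration.

```python
import numpy as np

def misfit(x):
    """Hypothetical objective: squared mismatch between a proxy
    simulator response and an assumed historical value of 180."""
    perm_mult, aquifer_strength = x
    simulated = perm_mult * 100.0 + aquifer_strength * 10.0
    return (simulated - 180.0) ** 2

def finite_diff_grad(f, x, h=1e-6):
    """Central-difference estimate of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = h
        g[i] = (f(x + step) - f(x - step)) / (2 * h)
    return g

# Gradient descent exploits the local slope of the objective,
# converging to ONE matched model rather than mapping out all of them.
x = np.array([1.0, 1.0])              # initial parameter guess
for _ in range(200):
    x -= 1e-5 * finite_diff_grad(misfit, x)

print("single matched model:", x, "misfit:", misfit(x))
```

Note that the search visits only a narrow path through the parameter space; other equally valid matched models (visible in the uniform-search sketch) are never found, which is exactly the under-estimation risk described above.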
All optimization methods share the same basic requirements: selection of the parameters to vary, definition of an objective function, and a workflow for updating the model. These selections are subjective decisions made by the reservoir team, and they are an integral part of history matching, uncertainty quantification, and decision making.
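A common (though by no means the only) choice of objective function is a weighted least-squares misfit between simulated and observed well responses, as in the sketch below. The well data and weights here are hypothetical; in practice the weighting reflects the team's judgment about which observations matter most.

```python
import numpy as np

def objective(simulated, observed, weights):
    """Weighted least-squares history-match misfit on relative errors."""
    residuals = (simulated - observed) / observed
    return np.sum(weights * residuals ** 2)

# Hypothetical monthly water-cut observations for one well.
observed  = np.array([0.10, 0.15, 0.22, 0.30])
simulated = np.array([0.12, 0.14, 0.25, 0.28])
weights   = np.array([1.0, 1.0, 2.0, 2.0])   # emphasize late-time match

print("misfit:", objective(simulated, observed, weights))
```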
Ensemble Kalman Filter (EnKF) methods have received a tremendous amount of attention in the research literature. EnKF uses an ensemble of reservoir models (typically more than 100) to calculate covariances between the model input parameters and the model responses. These covariances can be viewed as gradients, which are then used to minimize an objective function. The advantages of EnKF methods are that they can modify inter-well petrophysical properties (normally done in the model refinement step) and automatically account for correlations between parameters and responses. In addition, multiple history matched models are created through this procedure. There are, however, several potential limitations to EnKF methods.
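A minimal sketch of the ensemble update described above is given below, with a simple linear proxy standing in for the reservoir simulator: the ensemble cross-covariance between parameters and responses plays the role of a gradient in the Kalman-style update. The forward model, ensemble size, and observation values are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(m):
    """Hypothetical stand-in for the reservoir simulator: maps a
    parameter vector to a vector of predicted responses."""
    G = np.array([[1.0, 0.5],
                  [0.2, 1.5]])
    return G @ m

Ne = 100                                       # ensemble size
m_ens = rng.normal(1.0, 0.3, size=(2, Ne))     # prior parameter ensemble
d_ens = np.column_stack([forward_model(m_ens[:, j]) for j in range(Ne)])

d_obs = np.array([1.6, 1.9])                   # assumed historical data
obs_err = 0.05                                 # observation error std dev

# Cross-covariance C_md and response covariance C_dd from the ensemble.
M = m_ens - m_ens.mean(axis=1, keepdims=True)
D = d_ens - d_ens.mean(axis=1, keepdims=True)
C_md = M @ D.T / (Ne - 1)
C_dd = D @ D.T / (Ne - 1)

# Kalman gain, then update every member toward perturbed observations,
# yielding an ensemble of (approximately) history matched models.
K = C_md @ np.linalg.inv(C_dd + obs_err**2 * np.eye(2))
perturbed = d_obs[:, None] + rng.normal(0, obs_err, size=(2, Ne))
m_updated = m_ens + K @ (perturbed - d_ens)

print("prior mean:  ", m_ens.mean(axis=1))
print("updated mean:", m_updated.mean(axis=1))
```

Because the whole ensemble is updated at once, the result is a set of matched models rather than a single one, which is the property highlighted above.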
As described above, sampling methods are computationally very costly, yet they are theoretically rigorous and fit naturally into the framework of uncertainty quantification. Many procedures have been proposed to reduce the CPU requirements, such as response surfaces (cheap proxy models fitted to a limited number of simulation runs).
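As a sketch of the response-surface idea, the example below fits a cheap quadratic surrogate to misfit values from a handful of stand-in simulation runs and then samples the surrogate densely. The simulator placeholder, training points, and polynomial degree are assumptions; real applications use more sophisticated surrogates.

```python
import numpy as np

def expensive_simulation(perm_mult):
    """Hypothetical placeholder for a full simulation run
    (minutes to hours each in practice); returns a misfit value."""
    return (perm_mult - 1.3) ** 2 + 0.1 * np.sin(5 * perm_mult)

# Run the "simulator" only at a handful of training points...
train_x = np.linspace(0.5, 2.0, 7)
train_y = np.array([expensive_simulation(x) for x in train_x])

# ...then fit a cheap quadratic response surface to the misfit values.
coeffs = np.polyfit(train_x, train_y, deg=2)
surface = np.poly1d(coeffs)

# The surrogate can now be sampled densely at negligible cost.
dense_x = np.linspace(0.5, 2.0, 1001)
best = dense_x[np.argmin(surface(dense_x))]
print(f"surrogate minimum near perm_mult = {best:.3f}")
```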