2.2 Rule No.1: First trust your brain and only then the machine

We interpret reservoirs from a limited amount of data – mostly wells and seismic. To mitigate this problem, we evaluate the reservoir characteristics between data points using interpolation and extrapolation techniques. Numerous mathematical techniques exist, and it is up to us to select the one(s) most appropriate to a given property type (discrete/continuous), to a specific property (facies, porosity, permeability…), to the specific geological characteristics of the studied reservoir (clastics, carbonates, channels, reefs…), and to the specific purpose of the model (building a deterministic model / quantifying the uncertainties).

Interpolation means evaluating the property between the available data points. It is usually a well-defined problem, as the data points limit the possible range of the property. In contrast, extrapolation means evaluating the property beyond the last data point. It is a much more difficult problem, as one cannot be sure that the trend observed around the last set of data points can be propagated far past the last known value. Section 2.4 will illustrate this problem. Extrapolation problems can be turned into interpolation problems by including data from the immediate surroundings of the zone of interest (see Figure 2 in Section 1.4 for an example). All evaluation techniques interpolate and extrapolate at the same time. As a result, the geomodeler should take time to review any area which seems to be mostly the result of extrapolation.
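To make the contrast concrete, here is a minimal sketch in Python (assuming NumPy and SciPy are available; the well locations and values are hypothetical). Between the wells, the estimate is bounded by the surrounding data; beyond the last well, the last local trend is propagated unchecked.

    # Interpolation vs. extrapolation on synthetic well data
    import numpy as np
    from scipy.interpolate import interp1d

    x_wells = np.array([0.0, 1.0, 2.5, 4.0])              # hypothetical well locations (km)
    z_wells = np.array([1500.0, 1520.0, 1480.0, 1510.0])  # measured property at each well

    f = interp1d(x_wells, z_wells, kind="linear", fill_value="extrapolate")

    print(f(1.5))   # interpolation: constrained by the wells on both sides
    print(f(10.0))  # extrapolation: the last segment's trend continues indefinitely

The value at 10 km is far from anything actually observed, which is exactly why areas dominated by extrapolation deserve a careful review.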

Evaluation techniques can be deterministic or probabilistic. The former give a unique solution, such as the orange geometry for horizon A (Figure 1). The latter provide multiple solutions, such as the set of possible black geometries (Figure 1). Each realization honors the input parameters, here the well picks, while showing variations between the data points. Probabilistic techniques thus make it possible to take uncertainty into account. In Figure 1, we will never know exactly where the horizon lies between the well picks. But at least we can, and we should, quantify the level of uncertainty whenever possible.
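The following sketch (again Python with NumPy; a toy construction for illustration, not any production geostatistical simulator) shows the principle behind Figure 1: between consecutive well picks, a Brownian bridge adds random variation that vanishes exactly at the picks, so every realization honors the wells while differing in between.

    # Multiple probabilistic realizations that honor the well picks
    import numpy as np

    rng = np.random.default_rng(42)
    x_picks = np.array([0.0, 1.0, 2.5, 4.0])              # hypothetical well locations
    z_picks = np.array([1500.0, 1520.0, 1480.0, 1510.0])  # horizon depth at the wells

    def realization(n_steps=50, sigma=8.0):
        xs, zs = [], []
        for x0, x1, z0, z1 in zip(x_picks[:-1], x_picks[1:], z_picks[:-1], z_picks[1:]):
            t = np.linspace(0.0, 1.0, n_steps)
            w = np.cumsum(rng.normal(0.0, sigma / np.sqrt(n_steps), n_steps))
            w -= w[0]                       # start the random walk at zero
            bridge = w - t * w[-1]          # pin the walk at both ends
            xs.append(x0 + t * (x1 - x0))
            zs.append(z0 + t * (z1 - z0) + bridge)
        return np.concatenate(xs), np.concatenate(zs)

    for _ in range(3):
        x, z = realization()
        print(z[0], z[-1])  # every realization starts and ends on the pick values

Each call produces a different geometry; collecting many of them gives a picture of the uncertainty between the wells.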

Mathematical evaluation techniques available on our computers are useful. For example, they allow quick testing of multiple models. Also, all the input parameters can be archived and the method rerun at a later stage. But we must never forget that these techniques are the automation of the manual evaluation techniques that we, scientists, master. As such, we should never blindly trust what computers compute for us. If the results don't seem to make sense based on what we know about the reservoir (geological context, typical fluid characteristics, statistics at the wells…), then we must first double-check how we used the software before possibly changing our vision of the reservoir. It must never be done the other way around. Maybe we simply didn't use the most appropriate evaluation technique, or we didn't set its parameters correctly. Of course, there is no need to go to the other extreme. If everything ran as it should and the results still can't back up the assumptions, our hypotheses might need to be updated. The remainder of this chapter illustrates this point.

Geostatistics is the largest evaluation toolbox available to us, thanks to several main types of algorithms, which can, in turn, take multiple different types of input, from the most basic to the most sophisticated. Geostatistical techniques are powerful because they take into account not only the univariate statistics (mean value, min/max values, standard deviation…), but also how the property varies spatially between the data points. This is perfect for modelers, as many reservoir properties vary spatially. For example, rock types will have accumulated differently in different parts of the reservoir, depending on the geological context (fluvial, marine…). Porosity might increase with depth because of increasing compaction. Water saturation varies spatially depending on the fluid zone (gas, oil, water), and it might also vary with the distance to the contact itself (transition zone above an oil-water contact).
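The value added by the spatial dimension is easy to demonstrate. In the sketch below (Python with NumPy; the porosity values are synthetic), two profiles share the exact same values, hence identical histograms, mean, and standard deviation, yet one varies smoothly in space and the other erratically. Only a spatial statistic can tell them apart.

    # Same univariate statistics, very different spatial variability
    import numpy as np

    rng = np.random.default_rng(0)
    values = rng.uniform(0.05, 0.30, 200)   # synthetic porosity samples

    ordered = np.sort(values)               # smooth spatial trend
    shuffled = rng.permutation(values)      # spatially erratic profile

    for name, v in [("ordered", ordered), ("shuffled", shuffled)]:
        lag1 = 0.5 * np.mean(np.diff(v) ** 2)  # semivariance at lag 1
        print(name, v.mean(), v.std(), lag1)   # same mean/std, different semivariance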

Variograms are the key mathematical objects used to capture the spatial variability of the data. They are the input to kriging and simulation techniques. Variograms are to the understanding of spatial variability what histograms are to the understanding of univariate statistics: essential. For this reason, variograms are explained in some detail in the next section, so that every asset team member can understand how their reservoir modeler defined them in their project.
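As a preview of the next section, the experimental semivariogram of a regularly sampled profile is simply γ(h) = ½ · mean[(z(x+h) − z(x))²], computed for a set of lag distances h. A minimal sketch (Python with NumPy, on a synthetic, spatially correlated profile):

    # Experimental semivariogram of a regularly sampled 1D profile
    import numpy as np

    def experimental_variogram(z, max_lag):
        lags = np.arange(1, max_lag + 1)
        gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
        return lags, gamma

    rng = np.random.default_rng(1)
    # Moving average of white noise -> a spatially correlated synthetic profile
    z = np.convolve(rng.normal(size=300), np.ones(15) / 15, mode="valid")

    lags, gamma = experimental_variogram(z, max_lag=30)
    print(gamma[:5])   # small at short lags, rising toward a plateau

The semivariance typically increases with lag distance until it levels off at a plateau called the sill; the distance at which it does so is called the range.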

Once the notion of variogram is explained, the remainder of this chapter works through a simplified 2D dataset of a fluvial system to illustrate the results obtained with these two types of techniques.
