4.3 Defining the resolution of the 3D geological grid by upscaling the well logs

One of the first tasks of a geomodeling project is to decide at which resolution the model will be built. Should we use a cell size of 100m×100m horizontally? Finer, say 50m×50m? Or coarser, such as 250m×250m, if that level of detail is unnecessary? And what about the vertical cell size: 5m, 1m, 0.1m? Some geomodelers use the resolution that engineers will need for their 3D simulation grid. Simulation engineers might, for example, decide up front on a cell size of 100m×100m by 1m vertically because it keeps the number of cells at a level the flow simulator can manage. Experience shows, though, that it is wiser to select a cell size based on the expected spatial heterogeneity of the reservoir, especially of the facies. If the reservoir contains geobodies a few hundred meters wide, it is a good idea to use a smaller cell size, perhaps 50m×50m or even less. There will always be time later to upscale the whole 3D geological grid to the resolution the engineers need for their own grid; at that point, the geomodel might even prove that the cell size the engineers asked for would oversimplify the complexity of the reservoir. The upscaling of a geological grid into a simulation grid will be covered in the paper on Flow Simulation and Geomodeling, in two issues from now. For the vertical cell size, we should use the upscaling of the well logs as a means to decide what to do.

Facies data and petrophysical data are physically stored along each well object of the project. We now need to use these data to populate a different physical object: the 3D geological grid. While some geomodeling packages allow running geostatistics with the wells directly as input, it is wiser to first upscale the facies and petrophysical data into the 3D grid (a step also known as blocking the well data in some packages; both terms are used hereafter). The 3D geological grid must be refined enough to capture the spatial characteristics of the reservoir, and this starts with respecting the characteristics of the reservoir along the wells. The logs are upscaled to the resolution of the 3D grid and the upscaled logs are compared to the original well logs. If they are close enough, the vertical cell size is good. If the upscaled logs have lost too much of the important detail shown on the original logs, the 3D grid must be refined vertically.

A typical workflow goes as follows. First, the well data (facies and petrophysics) are analyzed to find the average thickness of the facies. This thickness gives an initial vertical cell size. The well data are blocked into the 3D grid and compared to the original well data. At this stage, we can face two situations. If the match is perfect, we should try a coarser cell size (maybe 2m instead of the initial 1m) and repeat the process until we reach a level where the upscaling no longer respects the log data well enough. The vertical cell size to use is the last one that worked: the coarsest cell size that respects the well data. If, on the contrary, the match is not good enough, then the initial vertical cell size was too coarse, and we need to refine progressively until no further improvement can be seen. Once the vertical cell size captures the data resolution very well (i.e., going any finer doesn't improve the resolution, while it needlessly increases the number of cells in the model), we have found the cell size we need.
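This trial-and-error loop can be sketched in a few lines. The sketch below assumes regularly sampled logs and uses the loss of standard deviation as a simplistic stand-in for "lost detail"; the function names, the doubling strategy and the 10% loss threshold are illustrative choices, not taken from any specific geomodeling package.

```python
import math
import statistics

def upscale(depths, values, cell_size, top, base):
    """Arithmetic average of the log samples falling in each vertical cell."""
    n_cells = math.ceil((base - top) / cell_size)
    blocked = []
    for i in range(n_cells):
        c_top, c_base = top + i * cell_size, top + (i + 1) * cell_size
        inside = [v for d, v in zip(depths, values) if c_top <= d < c_base]
        if inside:
            blocked.append(sum(inside) / len(inside))
    return blocked

def pick_vertical_cell_size(depths, values, top, base, start=0.25, max_loss=0.10):
    """Double the cell size until the blocked log has lost more than
    max_loss of the original standard deviation, then return the
    coarsest size that still honoured the well data."""
    orig_std = statistics.pstdev(values)
    size = best = start
    while size <= (base - top) / 2:
        blocked = upscale(depths, values, size, top, base)
        if len(blocked) < 2 or statistics.pstdev(blocked) < (1 - max_loss) * orig_std:
            break
        best = size
        size *= 2
    return best

# Synthetic well: 1 m thick beds alternating Sand (phi = 0.25) and
# Shale (phi = 0.05), logged every 10 cm between 0 and 8 m.
depths = [i / 10 for i in range(80)]
values = [0.25 if int(d) % 2 == 0 else 0.05 for d in depths]
print(pick_vertical_cell_size(depths, values, top=0.0, base=8.0))  # 1.0
```

On this synthetic well the loop stops at 1 m, the bed thickness: 2 m cells mix one Sand and one Shale bed and flatten the log completely, which is exactly the "lost too much detail" situation described above.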

Comparing the original logs to the upscaled values can be done qualitatively or quantitatively. If the project contains only a few wells, displaying original and upscaled values side by side on a well display is a good way to analyze the results, and a snapshot of such a display can help explain the process in a report. When the project contains many wells, such a qualitative approach becomes tedious if not impossible. An alternative is to compare the statistics of the original logs with those of the upscaled values. If the upscaling worked well, the original and upscaled logs should have similar distributions, and therefore similar percentiles, means and standard deviations. Such an analysis is enough to validate the vertical cell size. Nevertheless, it might be even more valuable to your team to also analyze the statistics of the meta-data of each well. Under the term meta-data, we group any type of computation your team has done on the well logs, independently of your work on the geomodel. Maybe a net-pay thickness has been computed at each well, as well as a net porous thickness and an oil-column thickness. These numbers might already have been used by your team to make decisions about the next steps of the whole project. Proving that the blocked well data respect these crucial results is a great way to get your team to support your geomodeling project.
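A minimal sketch of such a quantitative check, using Python's standard statistics module; the 10% relative tolerance and the particular set of statistics compared are arbitrary placeholders that a team would tune to its own acceptance criteria.

```python
import statistics

def log_summary(values):
    """Percentiles, mean and standard deviation used to compare an
    original log with its blocked (upscaled) version."""
    q = statistics.quantiles(values, n=10)  # P10 ... P90 cut points
    return {"mean": statistics.mean(values),
            "std": statistics.pstdev(values),
            "p10": q[0], "p50": q[4], "p90": q[8]}

def close_enough(original, blocked, rel_tol=0.10):
    """Accept the blocking if every statistic moved by less than rel_tol
    (relative) between the original and the blocked log."""
    s_o, s_b = log_summary(original), log_summary(blocked)
    return all(abs(s_o[k] - s_b[k]) <= rel_tol * max(abs(s_o[k]), 1e-12)
               for k in s_o)

phi = [0.18, 0.20, 0.22, 0.24, 0.26, 0.28, 0.30, 0.22, 0.24, 0.26]
print(close_enough(phi, phi))          # True: identical distributions
print(close_enough(phi, [0.24] * 10))  # False: blocking erased the spread
```

The second call fails even though the mean is preserved, which is the point of comparing full distributions rather than a single average.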

Many mathematical techniques exist to block the well data. Usually, the vertical cell size (25cm to 1m) is coarser than the log resolution (10cm if not less), so all these techniques correspond to some sort of averaging. For discrete properties like facies, the most common approach is to keep the most represented facies (the "most of" rule). For example, if in a given cell of 50cm height the well shows 40cm of Sand and 10cm of Shale, it makes sense to assign the facies Sand to this cell. Is the "loss" of 10cm of Shale important? As long as it doesn't impact the statistics along the well, probably not.
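The majority-vote rule for facies can be sketched as follows, assuming a regularly sampled facies log; all names are illustrative.

```python
from collections import Counter

def block_facies(depths, facies, cell_top, cell_size, n_cells):
    """Assign each grid cell the facies occupying most of its interval
    (majority vote over the log samples falling inside the cell)."""
    blocked = []
    for i in range(n_cells):
        top = cell_top + i * cell_size
        base = top + cell_size
        inside = [f for d, f in zip(depths, facies) if top <= d < base]
        blocked.append(Counter(inside).most_common(1)[0][0] if inside else None)
    return blocked

# The 50 cm cell from the text: 40 cm of Sand and 10 cm of Shale,
# sampled every 10 cm.
depths = [1000.0, 1000.1, 1000.2, 1000.3, 1000.4]
facies = ["Sand", "Sand", "Sand", "Sand", "Shale"]
print(block_facies(depths, facies, cell_top=1000.0, cell_size=0.5, n_cells=1))
# → ['Sand']
```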

Once the facies are blocked, the petrophysical logs are upscaled. It makes sense to do it in this order, as each facies usually shows a specific range of values for each petrophysical property. But which values shall we block? Only those associated with the blocked facies, or shall we average over all the values, no matter which facies they belonged to? In the previous example, shall we define a blocked (averaged) porosity from the porosity values of the 40cm of Sand only, since Sand was the blocked facies value? Or shall we include the values from the 10cm of Shale in the averaging? In the first case, we make sure that a blocked Sand has a blocked porosity within the expected range of porosity for this facies, but by doing so we have increased the average porosity over those 50cm. In the second case, we use all the values, so the averaged porosity is a closer representation of the spread of porosity along these 50cm, but we now have a "dirty" blocked Sand with a porosity value belonging neither to a Sand nor to a Shale. Each approach has its pros and cons. To get past this problem, we suggest the following approach. First, apply the rule that if Sand is the blocked facies in a cell, then the average values of the petrophysical properties are computed from the portion of the well that was Sand to start with. Then analyze your statistics. If everything looks fine, you have made the right call. If the reservoir is laminated, though, and many blocked cells show an input facies log close to 50/50 Shale and Sand, the cell size might simply not be appropriate: you need a finer resolution. For an extremely laminated reservoir, such a fine resolution might not be a valid option, unless you are ready to work with a 3D geological grid made of tens if not hundreds of millions of cells.
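The two averaging options can be contrasted in a few lines; the names and porosity values below are hypothetical, chosen to match the 40cm Sand / 10cm Shale example.

```python
def block_porosity_by_facies(samples, blocked_facies):
    """Facies-biased averaging: use only the log samples whose facies
    matches the blocked facies of the cell."""
    values = [phi for f, phi in samples if f == blocked_facies]
    return sum(values) / len(values) if values else float("nan")

# Cell blocked as Sand: 40 cm of Sand (phi around 0.25) and
# 10 cm of Shale (phi = 0.05), sampled every 10 cm.
samples = [("Sand", 0.24), ("Sand", 0.26), ("Sand", 0.25),
           ("Sand", 0.25), ("Shale", 0.05)]

biased = block_porosity_by_facies(samples, "Sand")        # Sand samples only
unbiased = sum(phi for _, phi in samples) / len(samples)  # all samples
print(round(biased, 3), round(unbiased, 3))  # 0.25 vs 0.21
```

The biased average (0.25) stays in the Sand porosity range but overstates the pore volume of the cell; the unbiased average (0.21) preserves the pore volume but belongs to neither facies, which is exactly the trade-off described above.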
An alternative is to revise the facies description on the wells and see whether all the zones showing Sand/Shale lamination can be renamed as a third facies, Laminated Sand or Shaly Sand (for example). This new facies is understood to be a mix of Sand and Shale, and as such it shows petrophysical values that are averages of those found in the pure Sand and pure Shale facies. In this example, moving from a 2-facies classification to a 3-facies classification gets around the problem: the cells with 50/50 Sand/Shale are now 100% made of this third facies. Once the new classification is applied, we can go back to upscaling the petrophysical logs by facies.

With modern geomodeling packages, the well-upscaling process can be run in a few minutes. Nevertheless, we recommend that you work on this step with great care, as any error here will be felt everywhere for the remainder of the project. This step should be used to validate the vertical cell size as well as to optimize the blocking. It makes sense for the geomodeler and the petrophysicist to work together on this step: defining the objectives, which statistics to look at, which numbers to match, and ultimately proving to the team that the blocked data are a very good starting point for both 3D facies and petrophysical modeling in the geomodeling project.

Table of contents

Introduction

Chapter 1 - Overview of the Geomodeling Workflow

Chapter 2 - Geostatistics

Chapter 3 - Geologists and Geomodeling

Chapter 4 - Petrophysicists and Geomodeling

Chapter 5 - Geophysicists and Geomodeling

Chapter 6 - Reservoir Engineers and Geomodeling

Chapter 7 - Reserve Engineers and Geomodeling

Chapter 8 - to be published in the summer 2019

Chapter 9 - to be published in the summer 2019
