Pore Scale Imaging and Modeling Section I

In this project, we have selected a comprehensive paper related to pore-scale imaging and modeling. The goal of this example is to investigate the permeability of different rock samples. Since several samples are analysed, we only include the general code here, which is applicable to the other samples as well; the results are therefore reported in figures and tables.

The structure of this report goes as follows:

  • Pore Network Extraction Method
  • Applying Stokes flow for permeability estimation

In [14]:
%%html
<style>
table {float:left}
</style>


Pore Network Extraction Method

In this project, we have used the SNOW algorithm in PoreSpy, which is a network extraction method based on marker-based watershed segmentation. The SNOW algorithm consists of four main steps (a minimal usage sketch of the stock routine is given after the list):

  • Prefiltering the distance map
  • Eliminating peaks on saddles and plateaus
  • Merging peaks that are too near each other
  • Assigning void voxels to the appropriate pore using a marker-based watershed.
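
For reference, the stock routine can be called directly before any modification. The following is a minimal sketch assuming a PoreSpy 1.x installation, where `snow_partitioning` is exposed under `porespy.filters` and accepts `return_all`; the exact module path and signature may differ between versions.

In [ ]:
import porespy as ps

# Small synthetic binary image (True = void space) just for illustration
im = ps.generators.blobs(shape=[200, 200, 200], porosity=0.5)

# Run the standard SNOW partitioning; return_all=True returns a tuple-like
# object carrying the filtered image, distance transform, peaks and regions
snow_out = ps.filters.snow_partitioning(im=im, r_max=4, sigma=0.4, return_all=True)
regions = snow_out.regions   # labelled pore regions from the marker-based watershed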

Effect of prefiltering parameters

In the first step, use of the right filtering parameters may enhance the reliability of the results. We use a Gaussian filter together with a spherical structuring element of radius R. The sigma (standard deviation) of the convolution kernel is an adjustable parameter, the effect of which can be studied with the following code. Another parameter to be considered is the radius R, which is also investigated for the same sample. Choosing the right values affects the smoothness of the resulting partitioned regions. In other words, suitable values prevent oversmoothing and the loss of a large amount of data from the original image. There is a trade-off between preserving the data and filtering, so we should find an optimum point for these parameters. The idea is shown in Fig. 4 of the paper. We have used the same idea to modify the snow_partitioning algorithm so that we obtain the desired output for this part. Because the full network extraction takes more time, we first investigate the effect of different R and sigma values as a preprocessing step, and then use the right parameters for the network extraction and call the SNOW algorithm.

The following piece of code is related to this prefiltering step (it is only the part of the whole code concerned with prefiltering). The filtering function has been changed so that the initial and final numbers of local maxima are stored in a results dictionary:


In [4]:
# Imports needed to run this snippet on its own (module paths assume PoreSpy 1.x
# and an older scikit-image where watershed lives in skimage.morphology)
from collections import namedtuple
import numpy as np
import scipy.ndimage as spim
from skimage.morphology import watershed
from porespy.filters import find_peaks, trim_saddle_points, trim_nearby_peaks
from porespy.tools import randomize_colors

def snow_partitioning_test(im, r_max=4, sigma=0.4, return_all=False):
    tup = namedtuple('results', field_names=['im', 'dt', 'peaks', 'regions'])
    results = {
        'r_max': r_max, 'sigma': sigma,
        'Initial number of peaks:': [],
        'Peaks after trimming saddle points:': [],
        'Peaks after trimming nearby peaks:':[]
    }
    print('-' * 80)
    print("Beginning SNOW Algorithm")
    im_shape = np.array(im.shape)
    if im.dtype == 'bool':
        print('Performing Distance Transform')
        if np.any(im_shape == 1):
            ax = np.where(im_shape == 1)[0][0]
            dt = spim.distance_transform_edt(input=im.squeeze())
            dt = np.expand_dims(dt, ax)
        else:
            dt = spim.distance_transform_edt(input=im)
    else:
        dt = im
        im = dt > 0

    tup.im = im
    tup.dt = dt

    if sigma > 0:
        print('Applying Gaussian blur with sigma =', str(sigma))
        dt = spim.gaussian_filter(input=dt, sigma=sigma)

    peaks = find_peaks(dt=dt, r_max=r_max)
    print('Initial number of peaks: ', spim.label(peaks)[1])
    results['Initial number of peaks:'] = spim.label(peaks)[1]
    peaks = trim_saddle_points(peaks=peaks, dt=dt, max_iters=500)
    print('Peaks after trimming saddle points: ', spim.label(peaks)[1])
    results['Peaks after trimming saddle points:'] = spim.label(peaks)[1]
    peaks = trim_nearby_peaks(peaks=peaks, dt=dt)
    peaks, N = spim.label(peaks)
    print('Peaks after trimming nearby peaks: ', N)
    results['Peaks after trimming nearby peaks:'] = N
    tup.peaks = peaks
    regions = watershed(image=-dt, markers=peaks, mask=dt > 0)
    regions = randomize_colors(regions)
    if return_all:
        tup.regions = regions
        return tup
    else:
        return results

In [5]:
imageinit = im  # binarized input image (True = void space), loaded beforehand
Resultslast = {}
R_max = [2,4,6,8,12,15,20]
Sigmax = [0.25,0.35,0.5,0.65]
c = -1
for j in range(len(Sigmax)):
    for i in range(len(R_max)):
        c = c+1
        r_max = R_max[i]
        sigma = Sigmax[j]
        results = snow_partitioning_test(im=imageinit, r_max=r_max, sigma=sigma, return_all=False)
        Resultslast[c] = results


Marching Cube Algorithm

Based on the new PoreSpy package, there are also some changes to the previous version of the SNOW algorithm. In the previous version the surface area was estimated as the number of voxels on the surface multiplied by the area of one voxel face. Now the user has the option to use the marching cubes algorithm. The idea of the algorithm is to find what portion of each cube is inside the image by marching a triangular mesh through the cube to find the best interface between the inner and outer parts of the image. Generally speaking, this reduces the voxelated (stair-stepped) character of the surface representation, which in turn increases the accuracy of the calculations. In the voxel-based surface area calculation, the whole voxel face is assigned to the surface even though only half of that voxel might lie within the surface, so it may lead to overestimation. Marching cubes may make the process slower, but it provides better results. To understand the algorithm, we show here a 2D example. Imagine an arbitrarily shaped image. If we mesh the area with a square grid (pixels, which become cubic voxels in 3D), we obtain the following image:

The red corners are within the image, the blue ones are outside. Each square whose four corners do not all share the same color is marched through until the most precise triangular mesh is obtained as the boundary. First the purple dots locate the center of each edge, which we know is a rough estimate. Then the connecting line (a surface in 3D) marches through the square so that it finds its way through the boundary at an optimum location. Implementation of the algorithm in 3D follows the same idea. The following picture is a sketch of the 3D implementation.

Although this option will give better results, we can still turn it off in the SNOW algorithm for the sake of time efficiency and still obtain good results.
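
To illustrate the difference, the following is a minimal sketch (not part of the original workflow) comparing a crude voxel-face surface-area estimate with a marching-cubes estimate for a simple ball; the function is named marching_cubes in recent scikit-image releases (older releases expose marching_cubes_lewiner instead).

In [ ]:
import numpy as np
from scipy import ndimage as spim
from skimage import measure, morphology

r = 20
ball = morphology.ball(r).astype(bool)   # binary sphere, radius 20 voxels
true_area = 4 * np.pi * r**2             # analytical surface area of the sphere

# Crude voxel-based estimate: one face per voxel sitting on the object's surface
surface_voxels = ball & ~spim.binary_erosion(ball)
voxel_area = surface_voxels.sum()

# Marching-cubes estimate: area of the triangulated isosurface
verts, faces, _, _ = measure.marching_cubes(ball.astype(float), level=0.5)
mc_area = measure.mesh_surface_area(verts, faces)

print(true_area, voxel_area, mc_area)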

Validation of the code

To ensure that our script for the network extraction is correct, we first applied the same code to Berea Sandstone, the validity of which can be proved by comparing with the results given in the paper. We have additional boundary face pores, but the internal pores are approximately the same as those of the SNOW paper.

permeabilities are: 1.20607725e-12, 1.0525892e-12, 1.18140011e-12

average permeability is: 1.1466888534117068e-12
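
The reported average is simply the arithmetic mean of the three directional values, which can be checked quickly:

In [ ]:
import numpy as np

# Directional permeabilities for the Berea validation case, as printed above
K = [1.20607725e-12, 1.0525892e-12, 1.18140011e-12]
print(np.mean(K))   # ~1.1467e-12, matching the reported average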

The results are very close to the SNOW paper (which reported 1.29e-12 from image analysis). This gives us confidence in our script for the network extraction and permeability calculation.

Extracted Networks

The following figures illustrate one segment of the CT images of the rock samples (MG and Bentheimer) in binarized form:

| Sample | Size | Resolution | Porosity |
| --- | --- | --- | --- |
| Mount Gambier (our model) | 512 × 512 × 512 | 3.024 μm | 0.436 |
| Mount Gambier (paper) | 350 × 350 × 350 | 9 μm | 0.556 |
| Bentheimer Sandstone (our model) | 300 × 300 × 300 | 3 μm | 0.2 |
| Bentheimer Sandstone (paper) | 1000 × 1000 × 1000 | 3.0035 μm | 0.217 |

The following code is the script we have written for the MG sample. The same code has been applied to the other samples.


In [15]:
import porespy as ps
import matplotlib.pyplot as plt
import openpnm as op
import numpy as np
import scipy as sp
ws = op.Workspace()
ws.clear()
ws.keys()
proj = ws.new_project()
from skimage import io
im = io.imread('MG.tif')
imtype=im.view()
print(imtype)
digits = np.prod(np.array(im.shape))
# The binarized image should contain only 0s and 1s; anything else is noise
logi = (np.sum(im == 0) + np.sum(im == 1)) == digits
if logi:
    print('There is no noise')
else:
    print('Please check your input image for noise')
print(im.shape)
im = np.array(im, dtype=bool)
# Inversion of 0s and 1s in binarized image to represent 1 for pores and 0 for solids
im = ~im 
print(ps.metrics.porosity(im))
plt.imshow(ps.visualization.sem(im), cmap=plt.cm.bone)  
net = ps.network_extraction.snow(im, voxel_size=3.024e-6,
         boundary_faces=['top', 'bottom', 'left', 'right', 'front', 'back'],
         marching_cubes_area=False) # voxel size and marching cube can be changed for each specific sample
pn = op.network.GenericNetwork()
pn.update(net)
print(pn)
a = pn.check_network_health()
op.topotools.trim(network=pn,pores=a['trim_pores'])
print(pn)
ps.io.to_vtk(path='MGvt',im=im.astype(sp.int8))
mgr = op.Workspace()
# The generated .pnm file will be used as input for simulations (permeability calculation, etc.)
mgr.save_workspace('MountGampn.pnm')


Now that we have ensured the validity of our script, we apply the network extraction to the samples of the study. Their network properties are given in the following table:

| Model | Number of pores | Number of throats | Volume (mm³) | Coordination number |
| --- | --- | --- | --- | --- |
| Mount Gambier Carbonate (512) | 5780 (4679 internal) | 10128 (9027 internal) | 27.65 | 3.504 |
| MG Paper (350) | 22665 (257 elements isolated) | 84593 | 31.3 | 7.41 |
| Bentheimer Sandstone (1000) | 26588 (23329 internal) | 48911 (45652 internal) | 27.1 | 3.68 |
| Bentheimer Paper (300) | Not given | Not given | 19.68 | Not given |

Some Comments:

As shown in the table, we have a good match on the average coordination numbers, but the numbers of pores and throats are different. This is related to the difference between the SNOW algorithm and the maximal ball method used at ICL. The SNOW algorithm produces larger pores, which decreases the number of pores and throats.

The porosity is calculated from the voxelated image in a similar manner to the paper. The permeabilities have been calculated using the Stokes flow algorithm. The differences might be related to the error introduced by the filtering parameters (sigma, R). We have used the default values of sigma = 0.4 and R = 5 for all samples, which may lead to misrepresentation of the network.

The difference in permeability may also be related to the different conduit lengths. In Blunt's paper a shape factor is defined to account for the non-cylindrical shape of the throats; this shape factor is built into the maximal ball extraction method. In the SNOW algorithm, using the equivalent diameter rather than the inscribed diameter for the hydraulic conductance (assuming no pressure loss in the pores) provides better results in the permeability calculation.

From the Berea sandstone results, we can also comment on the effect of the structure of the rock sample. For sandstones, the morphology is more suitable for network extraction than that of carbonates. We also obtain a good result for the Bentheimer Sandstone permeability, but for the carbonate cases it is different. As we see in their CT images, there are fossil grains (pebbles in Ketton, other fossil shells in the two other samples) which produce length scales ranging from micro- to macro-pores. For such cases, a multiscale pore network extraction is recommended.

Since none of our samples is exactly the same as the samples in Blunt's paper (they are from the same rocks but with different resolutions and sizes), the slight difference in results is acceptable.

Isolated pores and throats are trimmed using the "topotools" trimming method after the network extraction.

For the permeability calculation, we need to set the inlets and outlets of the medium, both of which can be defined by designating some pores as boundary surface pores.

Static parameters assignment

We redefine some parameters of the network by deleting them from the pn dictionary and adding models for them in the geometry:


In [ ]:
import openpnm as op
import numpy as np
import matplotlib.pyplot as plt
import math
import random
from pathlib import Path
mgr = op.Workspace()
mgr.clear()
mgr.keys()
path = Path('../fixtures/PoreScale Imaging/MountGampn.pnm')
mgr.load_workspace(path)
pn = mgr['sim_03']['net_01']
a = pn.check_network_health()
op.topotools.trim(network=pn,pores=a['trim_pores'])
proj = pn.project
print(pn)
coord_num_avg=np.mean(pn.num_neighbors(pores=pn.Ps))
del pn['pore.area']
del pn['throat.conduit_lengths.pore1']
del pn['throat.conduit_lengths.pore2']
del pn['throat.conduit_lengths.throat']
del pn['throat.endpoints.tail']
del pn['throat.endpoints.head']
del pn['throat.volume']

In this section we assign the Geometry, Phase, and Physics objects to the Network.


In [ ]:
geom = op.geometry.GenericGeometry(network=pn, pores=pn['pore.all'], throats=pn['throat.all'],project=proj)
geom.add_model(propname='throat.endpoints',
                model=op.models.geometry.throat_endpoints.spherical_pores)
geom.add_model(propname='pore.area',
                model=op.models.geometry.pore_area.sphere)
geom.add_model(propname='throat.volume',
                model=op.models.geometry.throat_volume.cylinder)
geom.add_model(propname='throat.conduit_lengths',
                model=op.models.geometry.throat_length.conduit_lengths)
oil = op.phases.GenericPhase(network=pn,project=proj)
water = op.phases.GenericPhase(network=pn,project=proj)
oil['pore.viscosity']=0.547e-3
oil['throat.contact_angle'] =180
oil['throat.surface_tension'] = 0.072
oil['pore.surface_tension']=0.072
oil['pore.contact_angle']=180
water['throat.contact_angle'] = 0 # first assumming highly water-wet
water['pore.contact_angle'] = 0
water['throat.surface_tension'] = 0.0483
water['pore.surface_tension'] = 0.0483
water['pore.viscosity']=0.4554e-3
phys_water= op.physics.GenericPhysics(network=pn, phase=water, geometry=geom,project=proj)
phys_oil = op.physics.GenericPhysics(network=pn, phase=oil, geometry=geom,project=proj)

mod = op.models.physics.hydraulic_conductance.hagen_poiseuille
phys_oil.add_model(propname='throat.hydraulic_conductance',
                              model=mod)
phys_oil.add_model(propname='throat.entry_pressure',
                              model=op.models.physics.capillary_pressure.washburn)
phys_water.add_model(propname='throat.hydraulic_conductance',
                              model=mod)
phys_water.add_model(propname='throat.entry_pressure',
                              model=op.models.physics.capillary_pressure.washburn)

Permeability Calculation Algorithm

The StokesFlow class is for the simulation of viscous flow. Default property names are set in this class. The main role of this class is the calculation of the hydraulic permeability. With its effective permeability calculation method, it can deal with non-uniform media.

We first find the single-phase permeability, where Stokes flow is solved for each phase as if it were the only phase flowing through the porous medium. This is done by using the hydraulic conductance as the conductance; otherwise, it would change to the multiphase conduit conductance. Note that we have defined perm_water and perm_oil in a dictionary so that we have a permeability tensor (directional permeability).

As we have mentioned, the permeability will be a tensor, which represents $K_x, K_y, K_z$. The permeability tensor plays an important role in the characterization of anisotropic media. We have also defined relative permeabilities in three directions. We only show the relative permeabilities for one direction in the report, but the code gives the results for all three directions in the oil and water permeability dictionaries.

We also define methods in which the domain length and area are calculated. These methods are called within the permeability calculation loop.
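
For reference, the effective permeability returned by the algorithm follows Darcy's law applied to the computed flow rate,

$$K = \frac{Q\,\mu\,L}{A\,\Delta P},$$

where $Q$ is the total volumetric flow rate through the inlet face, $\mu$ the phase viscosity, $L$ the domain length, $A$ the cross-sectional area, and $\Delta P$ the imposed pressure difference; $A$ and $L$ correspond to the domain_area and domain_length values returned by the helper functions below.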


In [ ]:
K_water_single_phase = [None,None,None]
K_oil_single_phase = [None,None,None]
bounds = [ ['top', 'bottom'], ['left', 'right'],['front', 'back']]

[amax, bmax, cmax] = np.max(pn['pore.coords'], axis=0)
[amin, bmin, cmin] = np.min(pn['pore.coords'], axis=0)
lx = amax-amin
ly = bmax-bmin
lz = cmax-cmin
da = lx*ly
dl = lz

def top_b(lx,ly,lz):
    da = lx*ly
    dl = lz
    res_2=[da,dl]
    return res_2

def left_r(lx,ly,lz):
    
    da = lx*lz
    dl = ly
    res_2=[da,dl]
    return res_2

def front_b(lx,ly,lz):
    da = ly*lz
    dl = lx
    res_2=[da,dl]
    return res_2

options = {0 : top_b(lx,ly,lz),1 : left_r(lx,ly,lz),2 : front_b(lx,ly,lz)}

for bound_increment in range(len(bounds)):
    BC1_pores = pn.pores(labels=bounds[bound_increment][0])
    BC2_pores = pn.pores(labels=bounds[bound_increment][1])
    [da,dl]=options[bound_increment]
    
    # Permeability - water
    sf_water = op.algorithms.StokesFlow(network=pn, phase=water)
    sf_water.setup(conductance='throat.hydraulic_conductance')
    sf_water.set_value_BC(pores=BC1_pores, values=100000)
    sf_water.set_value_BC(pores=BC2_pores, values=1000)
    sf_water.run()
    K_water_single_phase[bound_increment] = sf_water.calc_effective_permeability(domain_area=da,
                                                                                 domain_length=dl,
                                                                                 inlets=BC1_pores,
                                                                                 outlets=BC2_pores)
    proj.purge_object(obj=sf_water)  # remove the finished algorithm from the project
    
    # Permeability - oil
    sf_oil = op.algorithms.StokesFlow(network=pn, phase=oil)
    sf_oil.setup(conductance='throat.hydraulic_conductance')
    sf_oil.set_value_BC(pores=BC1_pores, values=1000)
    sf_oil.set_value_BC(pores=BC2_pores, values=0)
    sf_oil.run()
    K_oil_single_phase[bound_increment] = sf_oil.calc_effective_permeability(domain_area=da,
                                                                             domain_length=dl,
                                                                             inlets=BC1_pores,
                                                                             outlets=BC2_pores)
    proj.purge_object(obj=sf_oil)  # remove the finished algorithm from the project

Results for the permeability calculation of the samples are given below. As we can see, the result for Bentheimer, which is a sandstone rock, is very close to the value given in the paper. We have also adjusted the permeabilities of the Mount Gambier sample by using the equivalent diameter instead of the pore and throat (inscribed) diameter in the conductance calculation.

| Sample | Mount Gambier 512 | Bentheimer 1000 |
| --- | --- | --- |
| K1 (e-12 m²) | 18.93 | 1.57 |
| K2 (e-12 m²) | 23.96 | 1.4 |
| K3 (e-12 m²) | 12.25 | 1.64 |
| Kavg (e-12 m²) | 18.38 | 1.53 |

| Sample (paper) | Mount Gambier 350 | Bentheimer 300 |
| --- | --- | --- |
| Kavg (from image, e-12 m²) | 19.2 | 1.4 |
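
As noted above, the Mount Gambier permeabilities were adjusted by using the equivalent diameters in the conductance calculation. One way to do this (a minimal sketch, not the exact code used for the reported numbers) is to overwrite the inscribed diameters with the equivalent diameters produced by the SNOW extraction and regenerate the dependent models:

In [ ]:
# Assumes pn, geom, phys_water and phys_oil exist as defined above, and that the
# SNOW extraction provided 'pore.equivalent_diameter' / 'throat.equivalent_diameter'
pn['pore.diameter'] = pn['pore.equivalent_diameter']
pn['throat.diameter'] = pn['throat.equivalent_diameter']

geom.regenerate_models()        # endpoints, areas, volumes, conduit lengths
phys_water.regenerate_models()  # hydraulic conductances and entry pressures
phys_oil.regenerate_models()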