This tutorial will guide you through resource estimation with PyGSLIB. The informing data is from the BABBITT zone of the KEWEENAWAN DULUTH COMPLEX.
The sequence of data preparation and estimation steps used in this example is shown below.
To run this demo on your computer, download the demo files.
In this tutorial we will use the development version. You will have to install pygslib by compiling the source code.
git clone https://github.com/opengeostat/pygslib.git
cd pygslib
python setup.py build
python setup.py install
To compile on Windows we recommend reading the wiki page Before compiling in Windows. It explains how to install the prerequisites: MinGW-w64, MSVC, among other dependencies. Compiling on Linux or OSX is much easier.
In [1]:
# import third party python libraries
import pandas as pd
import matplotlib.pylab as plt
import numpy as np
# make plots inline
%matplotlib inline
# later try %matplotlib notebook
#%matplotlib notebook
In [2]:
# import pygslib
import pygslib
Need some help? Just type
help(pygslib)
There is also an online manual at https://opengeostat.github.io/pygslib/, but it is not up to date.
PyGSLIB requires drillhole tables loaded into Pandas DataFrames. Two tables are compulsory: the collar table and the survey table.
In addition, you may have any number of optional interval tables with the compulsory fields BHID, FROM and TO.
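As a minimal sketch, this is roughly the layout PyGSLIB expects. The field names shown here (XCOLLAR, YCOLLAR, ZCOLLAR, AT, AZ, DIP) follow the demo files; the values are made up, and help(pygslib.drillhole.Drillhole) gives the authoritative list of required fields.
# toy tables illustrating the expected layout (hypothetical values)
toy_collar = pd.DataFrame({'BHID': ['DH1'], 'XCOLLAR': [1000.], 'YCOLLAR': [2000.], 'ZCOLLAR': [100.]})
toy_survey = pd.DataFrame({'BHID': ['DH1', 'DH1'], 'AT': [0., 50.], 'AZ': [45., 45.], 'DIP': [-60., -60.]})
toy_assay  = pd.DataFrame({'BHID': ['DH1'], 'FROM': [0.], 'TO': [10.], 'CU': [0.3]})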
In [3]:
# importing drillhole tables into pandas dataframes
collar = pd.read_csv('Babbitt/collar_BABBITT.csv')
survey = pd.read_csv('Babbitt/survey_BABBITT.csv')
assay = pd.read_csv('Babbitt/assay_BABBITT.csv')
In [4]:
# print first 4 lines of the table collar
collar.head(4)
Out[4]:
In [5]:
# print first 3 lines of the table survey
survey.head(3)
Out[5]:
In [6]:
# print first 3 lines of the table assay
assay.head(3)
Out[6]:
Pandas provides a large set of functions to modify your data. Let's remove some columns and make non-assayed intervals equal to zero.
In [7]:
# dropping some columns
assay.drop(['NI','S','FE'], axis=1, inplace=True)
# making non-sampled intervals equal to zero
assay.loc[~np.isfinite(assay['CU']), 'CU']=0
In [8]:
#creating a drillhole object
mydholedb=pygslib.drillhole.Drillhole(collar=collar, survey=survey)
# now you can add as many interval tables as you want, for example, assays, lithology and RQD.
mydholedb.addtable(assay, 'assay', overwrite = False)
The output above is a warning message. This one is a complaint because the field LENGTH
was not included in the collar table. You will see similar warnings any time PyGSLIB detects a potential issue in your data.
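If you want to silence that particular warning, one hypothetical way to populate LENGTH is to use the deepest survey station of each hole as a proxy for total depth; this is only an assumption about your data, and it should be done before creating the Drillhole object.
# approximate hole length from the deepest survey station (assumption)
hole_depth = survey.groupby('BHID', as_index=False)['AT'].max().rename(columns={'AT': 'LENGTH'})
collar = collar.merge(hole_depth, on='BHID', how='left')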
In [9]:
# validating a drillhole object
mydholedb.validate()
The warning above is serious. There are drillholes with only one survey record; to desurvey we need at least two records, and the first one should be at the collar of the drillhole.
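You can list the offending holes yourself with a quick pandas check (this is plain pandas, not a PyGSLIB function):
# count survey records per hole and list those with only one
records_per_hole = survey.groupby('BHID').size()
print(records_per_hole[records_per_hole == 1].index.tolist())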
In [10]:
# fixing the issue of single interval at survey table
mydholedb.fix_survey_one_interval_err(90000.)
Note: To validate interval tables you may use the function validate_table.
In [11]:
#validating interval tables
mydholedb.validate_table('assay')
In [12]:
# Calculating length of sample intervals
mydholedb.table['assay']['Length']= mydholedb.table['assay']['TO']-mydholedb.table['assay']['FROM']
# plotting the interval lengths
mydholedb.table['assay']['Length'].hist(bins=np.arange(15)+0.5)
# printing length mode
print ('The Length Mode is:', mydholedb.table['assay']['Length'].mode()[0])
Most samples (the mode) are 10 ft long. This value, or any of its multiples, is a good option for the composite length because it minimizes the over-splitting of sample intervals.
In [13]:
# compositing
mydholedb.downh_composite('assay', variable_name= "CU", new_table_name= "CMP",
cint = 10, minlen=-1, overwrite = True)
In [14]:
# last 5 rows of the table
mydholedb.table["CMP"].tail(5)
Out[14]:
Note that some special fields were created; those fields have the prefix _. The field _acum is the grade accumulated in the composite interval (the sum of grades from the sample intervals contributing to the composite interval) and _len is the actual length of the composite.
In the table CMP the interval at row 54188 has FROM: 1010.0 and TO: 1020.0, but the sample length is only 7.0 ft. This way the FROM and TO intervals of any drillhole or table are always at the same position, and you can safely use the fields [BHID, FROM] to merge tables.
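For example, merging a second interval table on those keys is a one-liner; the lithology table below is made up just to illustrate the idea:
# sketch: merge a hypothetical lithology table sharing the same composite grid
litho_cmp = mydholedb.table['CMP'][['BHID', 'FROM']].copy()
litho_cmp['LITHO'] = 'MPZ'  # made-up lithology code
merged = mydholedb.table['CMP'].merge(litho_cmp, on=['BHID', 'FROM'], how='left')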
To plot drillholes in 3D or to estimate grade values you need to calculate the coordinates of the composites. This process is known as desurveying. There are many desurveying methods; PyGSLIB uses the minimum curvature method.
Desurveying will add the fields azm, dipm and xm, ym, zm; these are the directions and the coordinates at the mid point of the composite intervals. You have the option of also adding the endpoint coordinates xb, yb, zb and xe, ye, ze; these are required to export drillholes in VTK format.
In [15]:
# desurveying an interval table
mydholedb.desurvey('CMP',warns=False, endpoints=True)
# first 3 rows of a table
mydholedb.table["CMP"].head(3)
Out[15]:
The compiled FORTRAN code of GSLIB does not handle data of type str, so sometimes you need to transform the BHID to type int, for example, if you use a maximum number of samples per drillhole in kriging or when calculating downhole variograms. The function txt2intID will do this work for you.
In [16]:
# creating BHID of type integer
mydholedb.txt2intID('CMP')
# last 3 rows of a subtable
mydholedb.table["CMP"][['BHID', 'BHIDint', 'FROM', 'TO']].tail(3)
Out[16]:
In [17]:
# exporting results to VTK
mydholedb.export_core_vtk_line('CMP', 'cmp.vtk', title = 'Drillhole created in PyGSLIB')
This is how it looks in Paraview
Interval tables are stored as a python dictionary of {Table Name : Pandas DataFrame}. To export data to *.csv format use the Pandas function DataFrame.to_csv. You can also export to any other format supported by Pandas; see the list of formats supported.
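For instance, the same composite table can be written with any other Pandas writer; this is only a sketch, and to_excel needs an Excel engine such as openpyxl installed.
# other Pandas writers work the same way
mydholedb.table['CMP'].to_excel('cmp.xlsx', index=False)
mydholedb.table['CMP'].to_json('cmp.json', orient='records')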
In [18]:
# inspecting interval tables in drillhole object
print ("Table names ", mydholedb.table_mames)
print ("Tables names", mydholedb.table.keys())
print ("table is ", type(mydholedb.table))
In [19]:
# exporting to csv
mydholedb.table["CMP"].to_csv('cmp.csv', index=False)
In [20]:
# importing a wireframe (this one was created with https://geomodelr.com)
domain=pygslib.vtktools.loadSTL('Babbitt/Mpz.stl')
Only Stereo Lithography (*.STL) and XML VTK Polydata (VTP) file formats are implemented. If your data is in a different format, e.g. DXF, you can use a file format converter; my favorite is meshconv.
In [21]:
# creating array to tag samples in domain1
inside1=pygslib.vtktools.pointinsolid(domain,
x=mydholedb.table['CMP']['xm'].values,
y=mydholedb.table['CMP']['ym'].values,
z=mydholedb.table['CMP']['zm'].values)
# creating a new domain field
mydholedb.table['CMP']['Domain']=inside1.astype(int)
# first 3 rows of a subtable
mydholedb.table['CMP'][['BHID', 'FROM', 'TO', 'Domain']].head(3)
Out[21]:
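A quick sanity check is to count how many composites were tagged inside and outside the solid:
# number of composites inside (1) and outside (0) the wireframe
print(mydholedb.table['CMP']['Domain'].value_counts())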
In [22]:
# exporting results to VTK
mydholedb.export_core_vtk_line('CMP', 'cmp.vtk', title = 'Generated with PyGSLIB')
# exporting to csv
mydholedb.table["CMP"].to_csv('cmp.csv', index=False)
A section of the wireframe and the drillholes may look as follows
Cu grades will be estimated on blocks inside the mineralized domain. To create those blocks you may use the class pygslib.blockmodel.Blockmodel.
In PyGSLIB we use percent blocks, similar to GEMS ®. In the future we will implement subcell style, similar to Surpac ®, using Adaptive Mesh Refinement (AMR).
Blocks are stored in the class member bmtable; this is a Pandas DataFrame with a special index field IJK or [IX, IY, IZ] and coordinates [XC, YC, ZC]. We use GSLIB order; in other words, IJK is the equivalent of the row number in a GSLIB grid.
Block model tables can be full or partial (with some missing blocks). Only one table will be available in a block model object.
The block model definition is stored in the members nx, ny, nz, xorg, yorg, zorg, dx, dy, dz. The origin xorg, yorg, zorg refers to the lower left corner of the lower left block (not the centroid), as in Datamine Studio ®.
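The following sketch, with toy values, illustrates how a block centroid and a GSLIB-style IJK index follow from such a definition. It assumes zero-based indices with X cycling fastest; check the Blockmodel documentation for the exact convention used internally.
# toy example: centroid and row number (IJK) of block (ix, iy, iz)
nx_t, ny_t = 2, 3          # toy grid size
xorg_t, dx_t = 0., 10.     # toy origin (block corner) and block size
ix, iy, iz = 1, 2, 0
xc = xorg_t + (ix + 0.5) * dx_t          # centroid measured from the corner origin
ijk = ix + iy * nx_t + iz * nx_t * ny_t  # row number in a GSLIB grid (assumed order)
print(xc, ijk)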
In [23]:
# The model definition
xorg = 2288230
yorg = 415200
zorg = -1000
dx = 100
dy = 100
dz = 30
nx = 160
ny = 100
nz = 90
In [24]:
# Creating an empty block model
mymodel=pygslib.blockmodel.Blockmodel(nx,ny,nz,xorg,yorg,zorg,dx,dy,dz)
In [25]:
# filling wireframe with blocks
mymodel.fillwireframe(domain)
# the fillwireframe function generates a field named __in,
# this is the proportion inside the wireframe. Here we rename __in to D1
mymodel.bmtable.rename(columns={'__in': 'D1'},inplace=True)
In [26]:
# creating a partial model by filtering out blocks with zero proportion inside the solid
mymodel.set_blocks(mymodel.bmtable[mymodel.bmtable['D1']> 0])
# export partial model to a vtk unstructured grid (*.vtu)
mymodel.blocks2vtkUnstructuredGrid(path='model.vtu')
Note that fillwireframe created or overwrote mymodel.bmtable. The blocks outside the wireframe were filtered out, and the final output is a partial model with blocks inside or touching the wireframe domain.
Note that fillwireframe works with closed surfaces only.
A section view of the blocks colored by percentage inside the solid and the wireframe (white lines) may look as follows:
You may spend some time doing exploratory data analysis, looking at statistical plots, 3D views and 2D sections of your data. A good commercial software for this is Supervisor ®; open source options are Pandas, Statsmodels, Seaborn and glueviz.
PyGSLIB includes some minimum functionality for statistical plots and calculations, with support for declustering weights. Here we demonstrate how you can do a declustering analysis of the samples in the mineralized domain and how to evaluate the declustered mean. The declustered mean will be compared later with the mean of the CU estimates.
Note: In this section we are not including all the statistical analysis usually required for resource estimation.
In [27]:
#declustering parameters
parameters_declus = {
'x' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'xm'],
'y' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'ym'],
'z' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'zm'],
'vr' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'],
'anisy' : 1.,
'anisz' : 0.05,
'minmax' : 0,
'ncell' : 100,
'cmin' : 100.,
'cmax' : 5000.,
'noff' : 8,
'maxcel' : -1}
# declustering
wtopt,vrop,wtmin,wtmax,error, \
xinc,yinc,zinc,rxcs,rycs,rzcs,rvrcr = pygslib.gslib.declus(parameters_declus)
#Plotting declustering optimization results
plt.plot (rxcs, rvrcr, '-o')
plt.xlabel('X cell size')
plt.ylabel('declustered mean')
plt.show()
plt.plot (rycs, rvrcr, '-o')
plt.xlabel('Y cell size')
plt.ylabel('declustered mean')
plt.show()
plt.plot (rzcs, rvrcr, '-o')
plt.xlabel('Z cell size')
plt.ylabel('declustered mean')
plt.show()
In [28]:
# parameters for declustering with the cell size selected
parameters_declus = {
'x' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'xm'],
'y' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'ym'],
'z' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'zm'],
'vr' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'],
'anisy' : 1., # y == x
'anisz' : 0.1, # z = x/20
'minmax' : 0,
'ncell' : 1,
'cmin' : 1000.,
'cmax' : 1000.,
'noff' : 8,
'maxcel' : -1}
# declustering
wtopt,vrop,wtmin,wtmax,error, \
xinc,yinc,zinc,rxcs,rycs,rzcs,rvrcr = pygslib.gslib.declus(parameters_declus)
# Adding declustering weight to a drillhole interval table
mydholedb.table["CMP"]['declustwt'] = 1
mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'declustwt'] = wtopt
# calculating declustered mean
decl_mean = rvrcr[0]
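As a cross-check you can recompute that declustered mean directly with numpy, using the declustwt column just added (it should be close to rvrcr[0]):
# declustered mean recomputed from the declustering weights
cu_dom1 = mydholedb.table['CMP'].loc[mydholedb.table['CMP']['Domain']==1, 'CU']
wt_dom1 = mydholedb.table['CMP'].loc[mydholedb.table['CMP']['Domain']==1, 'declustwt']
print('weighted mean :', np.average(cu_dom1, weights=wt_dom1))
print('naive mean    :', cu_dom1.mean())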
Now we can calculate some declustered statistics and plot declustered histograms.
In [29]:
# prepare parameters dictionary
parameters = {
'hmin' : None, #in/output rank-0 array(float,'d')
'hmax' : None, #in/output rank-0 array(float,'d')
'ncl' : 30, #int, number of bins
'iwt' : 1, #int, 1 use declustering weight
'ilog' : 1, #int, 1 use logscale
'icum' : 0, #int, 1 use cumulative
'va' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'], # array('d') with bounds (nd)
'wt' : mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'declustwt'], # array('d') with bounds (nd), weight variable (obtained with declus)
'figure' : None , # a bokeh figure object (Optional: new figure created if None). Set none or undefined if creating a new figure.
'title' : 'Hist Cu', # string. Figure title
'xlabel' : 'Cu (%)', # string. X axis label
'ylabel' : 'f(%)', # string. Y axis label
# visual parameter for the histogram
'color' : 'red', # string with valid CSS colour (https://www.w3schools.com/colors/colors_names.asp), or an RGB(A) hex value, or tuple of integers (r,g,b), or tuple of (r,g,b,a)
'legend': 'Non - Declustered', # string (Optional, default "NA")
'alpha' : 0.5, # float [0-1]. Transparency of the fill colour
'lwidth': 1, # float. Line width
# legend
'legendloc': 'top_left'}
# calculate histogram
stats, fig = pygslib.plothtml.histgplt(parameters)
print ('CV', stats['xcvr'])
print ('Mean', stats['xmen'])
print ('Min', stats['xmin'])
print ('Max', stats['xmax'])
# show the figure
pygslib.plothtml.show(fig)
In [30]:
# plot CDF
parameters_probplt = {
# gslib parameters for histogram calculation
'iwt' : 1, # input boolean (Optional: set True). Use weight variable?
'va' : mydholedb.table["CMP"].loc[(mydholedb.table['CMP']['Domain']==1) & (mydholedb.table['CMP']['CU']>0), 'CU'], # array('d') with bounds (nd)
'wt' : mydholedb.table["CMP"].loc[(mydholedb.table['CMP']['Domain']==1) & (mydholedb.table['CMP']['CU']>0), 'declustwt'], # array('d') with bounds (nd), weight variable (obtained with declus)
# visual parameters for figure (if a new figure is created)
'figure' : None, # a bokeh figure object (Optional: new figure created if None). Set none or undefined if creating a new figure.
'title' : 'Prob plot', # string (Optional, "Histogram"). Figure title
'xlabel' : 'Cu', # string (Optional, default "Z"). X axis label
'ylabel' : 'P[Cu<c]', # string (Optional, default "f(%)"). Y axis label
'xlog' : 1, # boolean (Optional, default True). If true plot X axis in log scale.
'ylog' : 1, # boolean (Optional, default True). If true plot Y axis in log scale.
# visual parameter for the probplt
'style' : 'cross', # string with valid bokeh chart type
'color' : 'blue', # string with valid CSS colour (https://www.w3schools.com/colors/colors_names.asp), or an RGB(A) hex value, or tuple of integers (r,g,b), or tuple of (r,g,b,a) (Optional, default "navy")
'legend': 'Declustered Cu', # string (Optional, default "NA").
'alpha' : 1, # float [0-1] (Optional, default 0.5). Transparency of the fill colour
'lwidth': 0, # float (Optional, default 1). Line width
# legend
'legendloc': 'bottom_right'} # string (Optional, default 'top_right'). Any of top_left, top_center, top_right, center_right, bottom_right, bottom_center, bottom_left, center_left or center
results, fig2 = pygslib.plothtml.probplt(parameters_probplt)
# show the plot
pygslib.plothtml.show(fig2)
In [31]:
results
Out[31]:
In [32]:
# TODO:
For estimation you may use the function pygslib.gslib.kt3d, which is the GSLIB KT3D program modified and embedded into python. KT3D now includes a maximum number of samples per drillhole in the search ellipsoid, and the estimation is done only in the blocks provided as arrays.
The input parameters of pygslib.gslib.kt3d are defined in a large and complicated dictionary. You can see the full definition of this dictionary by typing
print(pygslib.gslib.kt3d.__doc__)
Note that some parameters are optional. PyGSLIB will initialize those parameters to zero or to arrays of zeros; for example, if you exclude the coordinate Z, PyGSLIB will create an array of zeros in its place.
To understand GSLIB’s KT3D parameters you may read the GSLIB user manual or the kt3d gslib program parameter documentation.
Note that in PyGSLIB the parameters nx, ny and nz are only used by the superblock search algorithm; if these parameters are arbitrary the output will be correct, but the running time may be longer.
In [33]:
# creating parameter dictionary for estimation in one block
kt3d_Parameters = {
# Input Data (Only using intervals in the mineralized domain)
# ----------
'x' : mydholedb.table["CMP"]['xm'][mydholedb.table["CMP"]['Domain']==1].values,
'y' : mydholedb.table["CMP"]['ym'][mydholedb.table["CMP"]['Domain']==1].values,
'z' : mydholedb.table["CMP"]['zm'][mydholedb.table["CMP"]['Domain']==1].values,
'vr' : mydholedb.table["CMP"]['CU'][mydholedb.table["CMP"]['Domain']==1].values,
'bhid' : mydholedb.table["CMP"]['BHIDint'][mydholedb.table["CMP"]['Domain']==1].values, # an integer BHID
# Output (Target)
# ----------
'nx' : nx,
'ny' : ny,
'nz' : nz,
'xmn' : xorg,
'ymn' : yorg,
'zmn' : zorg,
'xsiz' : dx,
'ysiz' : dy,
'zsiz' : dz,
'nxdis' : 5,
'nydis' : 5,
'nzdis' : 3,
'outx' : mymodel.bmtable['XC'][mymodel.bmtable['IJK']==1149229].values, # filter to estimate only on block with IJK 1149229
'outy' : mymodel.bmtable['YC'][mymodel.bmtable['IJK']==1149229].values,
'outz' : mymodel.bmtable['ZC'][mymodel.bmtable['IJK']==1149229].values,
# Search parameters
# ----------
'radius' : 850,
'radius1' : 850,
'radius2' : 250,
'sang1' : -28,
'sang2' : 34,
'sang3' : 7,
'ndmax' : 12,
'ndmin' : 4,
'noct' : 0,
'nbhid' : 3,
# Kriging parameters and options
# ----------
'ktype' : 1, # 1 Ordinary kriging
'idbg' : 1, # 1 for full debug output (0 for no debug)
# Variogram parameters
# ----------
'c0' : 0.35 * 0.109758094158, # GCOS requires the non-normalized variance, so we multiply the normalized sill by the actual variance
'it' : [2,2],
'cc' : [0.41*0.109758094158,0.23*0.109758094158],
'aa' : [96,1117],
'aa1' : [96,1117],
'aa2' : [96,300],
'ang1' : [-28,-28],
'ang2' : [ 34, 34],
'ang3' : [ 7, 7]}
The variogram was calculated and modelled in a different software.
The variogram types are as explained in http://www.gslib.com/gslib_help/vmtype.html; for example, 'it' : [2,2] means two exponential models, in other words [Exponential 1, Exponential 2].
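To visualize what those parameters imply, here is a quick sketch of the modelled variogram along the major direction, assuming the GSLIB convention in which the exponential structure reaches about 95% of its sill at the practical range aa:
# sketch of the modelled variogram along the major axis (practical ranges 96 and 1117 ft)
h = np.linspace(0, 2000, 200)
c0 = 0.35 * 0.109758094158
cc = [0.41 * 0.109758094158, 0.23 * 0.109758094158]
aa = [96., 1117.]
gam = c0 + sum(c * (1 - np.exp(-3 * h / a)) for c, a in zip(cc, aa))
plt.plot(h, gam)
plt.xlabel('h (ft)')
plt.ylabel('gamma(h)')
plt.show()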
Only the block with index IJK equal to 1149229 was used this time, and 'idbg' was set to one in order to get a full output of the last (and unique) block estimate, including the samples selected, the kriging weights and the search ellipsoid.
In [34]:
# estimating in one block
estimate, debug, summary = pygslib.gslib.kt3d(kt3d_Parameters)
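Because this is ordinary kriging, a quick sanity check on the debug output is that the kriging weights add up to approximately one (assuming debug['dbgwt'] holds the weights used for the last estimated block, as suggested by the export below):
# ordinary kriging weights should sum to ~1 for the debugged block
print('sum of kriging weights:', np.sum(debug['dbgwt']))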
In [35]:
#saving debug to a csv file using Pandas
pd.DataFrame({'x':debug['dbgxdat'],'y':debug['dbgydat'],'z':debug['dbgzdat'],'wt':debug['dbgwt']}).to_csv('dbg_data.csv', index=False)
#pd.DataFrame({'x':[debug['dbgxtg']],'y':[debug['dbgytg']],'z':[debug['dbgztg']],'na':[debug['na']]}).to_csv('dbg_target.csv', index=False)
# save the search ellipse to a VTK file
pygslib.vtktools.SavePolydata(debug['ellipsoid'], 'search_ellipsoid')
The results may look like this in Paraview.
In [36]:
# calculate block variance, we need it for global change of support validation
# you can also calculate this with the function pygslib.gslib.block_covariance(...)
cbb=debug['cbb']
In [37]:
# update parameter file
kt3d_Parameters['idbg'] = 0 # switch debug off
kt3d_Parameters['outx'] = mymodel.bmtable['XC'].values # use all the blocks
kt3d_Parameters['outy'] = mymodel.bmtable['YC'].values
kt3d_Parameters['outz'] = mymodel.bmtable['ZC'].values
In [38]:
# estimating in all blocks
estimate, debug, summary = pygslib.gslib.kt3d(kt3d_Parameters)
In [39]:
# adding the estimate into the model
mymodel.bmtable['CU_OK'] = estimate['outest']
mymodel.bmtable['CU_ID2'] = estimate['outidpower']
mymodel.bmtable['CU_NN'] = estimate['outnn']
mymodel.bmtable['CU_Lagrange'] = estimate['outlagrange']
mymodel.bmtable['CU_KVar']= estimate['outkvar']
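It is worth checking how many blocks could not be estimated with the current search parameters; unestimated blocks come back as missing values.
# blocks without an OK estimate (outside the search or with too few samples)
n_missing = mymodel.bmtable['CU_OK'].isnull().sum()
print(n_missing, 'of', len(mymodel.bmtable), 'blocks were not estimated')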
In [40]:
# exporting block model to VTK (unstructured grid)
mymodel.blocks2vtkUnstructuredGrid(path='model.vtu')
# exporting to csv using Pandas
mymodel.bmtable['Domain']= 1
mymodel.bmtable[mymodel.bmtable['CU_OK'].notnull()].to_csv('model.csv', index = False)
In [41]:
print ("Mean in model OK :", mymodel.bmtable['CU_OK'].mean())
print ("Mean in model ID2 :", mymodel.bmtable['CU_ID2'].mean())
print ("Mean in model NN :", mymodel.bmtable['CU_NN'].mean())
print ("Mean in data :", mydholedb.table["CMP"]['CU'][mydholedb.table["CMP"]['Domain']==1].mean())
print ("Declustered mean:", decl_mean)
There are different ways of doing swath plots. PyGSLIB does not include a dedicated function for this, but we can implement a simple swath plot, averaging the model along each coordinate, with one line of pandas.
In [42]:
mymodel.bmtable.groupby('XC')[['CU_OK','CU_ID2','CU_NN']].mean().plot()
Out[42]:
In [43]:
mymodel.bmtable.groupby('YC')[['CU_OK','CU_ID2','CU_NN']].mean().plot()
Out[43]:
In [44]:
mymodel.bmtable.groupby('ZC')[['CU_OK','CU_ID2','CU_NN']].mean().plot()
Out[44]:
In [45]:
# Fit anamorphosis by changing, zmax, zmin, and extrapolation function
PCI, H, raw, zana, gauss, z, P, raw_var, PCI_var, fig1 = pygslib.nonlinear.anamor(
z = mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'],
w = mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'declustwt'],
zmin = mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'].min(),
zmax = mydholedb.table["CMP"].loc[mydholedb.table['CMP']['Domain']==1, 'CU'].max(),
zpmin = None, zpmax = None,
ymin=-5, ymax=5,
ndisc = 5000,
ltail=1, utail=4, ltpar=1, utpar=1.5, K=40)
In [46]:
# calculate the support correction coefficient r
r = pygslib.nonlinear.get_r(Var_Zv = cbb, PCI = PCI)
print ('cbb :', cbb)
print ('r :', r)
Note that r is very low...
In [47]:
# fit block anamorphosis
ZV, PV, fig2 = pygslib.nonlinear.anamor_blk( PCI, H, r = r, gauss = gauss, Z = z,
ltail=1, utail=1, ltpar=1, utpar=1,
raw=raw, zana=zana)
In [48]:
cutoff = np.arange(0,0.6, 0.01)
tt = []
gg = []
label = []
# calculate GTC from gaussian in block support
t,ga,gb = pygslib.nonlinear.gtcurve (cutoff = cutoff, z=ZV, p=PV, varred = 1, ivtyp = 0, zmin = 0, zmax = None,
ltail = 1, ltpar = 1, middle = 1, mpar = 1, utail = 1, utpar = 1,maxdis = 1000)
tt.append(t)
gg.append(ga)
label.append('DGM with block support')
In [49]:
fig = pygslib.nonlinear.plotgt(cutoff = cutoff, t = tt, g = gg, label = label)
In [50]:
# to compare the global resources with the estimated ones we calculate the CDF of the blocks
# cdf of kriging estimate
parameters_probplt = {
'iwt' : 0, #int, 1 use declustering weight
'va' : mymodel.bmtable['CU_OK'][mymodel.bmtable['CU_OK'].notnull()].values, # array('d') with bounds (nd)
'wt' : np.ones(mymodel.bmtable['CU_OK'][mymodel.bmtable['CU_OK'].notnull()].shape[0])} # array('d') with bounds (nd), weight variable (here all ones, no declustering)
binval_ok,cl_ok,xpt025,xlqt,xmed,xuqt,xpt975,xmin,xmax, \
xcvr,xmen,xvar,error = pygslib.gslib.__plot.probplt(**parameters_probplt)
# cdf of id2
parameters_probplt = {
'iwt' : 0, #int, 1 use declustering weight
'va' : mymodel.bmtable['CU_ID2'][mymodel.bmtable['CU_OK'].notnull()].values, # array('d') with bounds (nd)
'wt' : np.ones(mymodel.bmtable['CU_OK'][mymodel.bmtable['CU_OK'].notnull()].shape[0])} # array('d') with bounds (nd), weight variable (here all ones, no declustering)
binval_id2,cl_id2,xpt025,xlqt,xmed,xuqt,xpt975,xmin,xmax, \
xcvr,xmen,xvar,error = pygslib.gslib.__plot.probplt(**parameters_probplt)
In [51]:
# calculate GTC ok
t,ga,gb = pygslib.nonlinear.gtcurve (cutoff = cutoff, z=cl_ok, p=binval_ok, varred = 1, ivtyp = 2, zmin = 0, zmax = None,
ltail = 1, ltpar = 1, middle = 1, mpar = 1, utail = 1, utpar = 1,maxdis = 1000)
tt.append(t)
gg.append(ga)
label.append('Ordinary Kriging')
# calculate GTC for ID2
t,ga,gb = pygslib.nonlinear.gtcurve (cutoff = cutoff, z=cl_id2, p=binval_id2, varred = 1, ivtyp = 2, zmin = 0, zmax = None,
ltail = 1, ltpar = 1, middle = 1, mpar = 1, utail = 1, utpar = 1,maxdis = 1000)
tt.append(t)
gg.append(ga)
label.append('Inverse of the Distance 2')
In [52]:
fig = pygslib.nonlinear.plotgt(cutoff = cutoff, t = tt, g = gg, label = label)
In [53]:
# we can plot differences (relative error in grade)
plt.plot (cutoff, gg[0]-gg[1], label = 'DGM - OK')
plt.plot (cutoff, gg[0]-gg[2], label = 'DGM - ID2')
plt.plot (cutoff, np.zeros(cutoff.shape[0]),'--k', label = 'Zero error')
plt.title('relative error in grade')
plt.legend()
Out[53]:
In [54]:
# we can plot differences (relative error in tonnage)
plt.plot (cutoff, tt[0]-tt[1], label = 'DGM - OK')
plt.plot (cutoff, tt[0]-tt[2], label = 'DGM - ID2')
plt.plot (cutoff, np.zeros(cutoff.shape[0]),'--k', label = 'Zero error')
plt.legend()
plt.title('relative error in tonnage')
Out[54]:
In [55]:
# To get tonnes just multiply by the total tonnage
# calculate total tonnage (million tonnes)
ttonnes = mymodel.bmtable['D1'][mymodel.bmtable['CU_OK'].notnull()].sum()*100*100*30* 0.0283168 * 2.7 /1000000
# cubic feet to cubic metres -> 0.0283168, density 2.7 t/m3
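To keep these curves for reporting you can tabulate and export them, assuming tt[0] and gg[0] are the DGM tonnage proportion and mean grade above each cutoff, as appended earlier:
# grade-tonnage table for the DGM (block support) curve
gt_table = pd.DataFrame({'cutoff': cutoff,
                         'tonnes_Mt': tt[0] * ttonnes,
                         'grade_above_cutoff': gg[0]})
gt_table.to_csv('grade_tonnage_DGM.csv', index=False)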
In [56]:
ttt = tt[0]*ttonnes
#plot
plt.plot(cutoff, ttt)
plt.ylabel('Mt')
plt.xlabel('Cutoff')
Out[56]: