In this notebook, we mainly use extreme gradient boosting (XGBoost) to improve the prediction model originally proposed in the TLE November 2016 machine learning tutorial. XGBoost can be viewed as an enhanced version of gradient boosting: it uses a more regularized model formalization to control over-fitting, and it usually performs better. Applications of XGBoost can be found in many Kaggle competitions, and some recommended tutorials can be found online.
Our work is organized in the following order:
• Background
• Exploratory Data Analysis
• Data Preparation and Model Selection
• Final Results
The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more information on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
• Five wireline log curves: gamma ray (GR), resistivity (ILD_log10), photoelectric effect (PE), neutron-density porosity difference (DeltaPHI), and average neutron-density porosity (PHIND). Note that some wells do not have PE.
• Two geologic constraining variables: a nonmarine-marine indicator (NM_M) and relative position (RELPOS).
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't truly discrete; they gradually blend into one another, and some neighboring facies are quite similar. Mislabeling within these neighboring facies can be expected. The following table lists the facies, their abbreviated labels, and their approximate neighbors.
Facies  Label  Adjacent facies
1       SS     2
2       CSiS   1,3
3       FSiS   2
4       SiSh   5
5       MS     4,6
6       WS     5,7
7       D      6,8
8       PS     6,7,9
9       BS     7,8
After this background introduction, we import the pandas library for basic data analysis and manipulation. Matplotlib and seaborn are imported for data visualization.
In [1]:
%matplotlib inline
import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import matplotlib.colors as colors
import xgboost as xgb
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score, roc_auc_score
from classification_utilities import display_cm, display_adj_cm
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import validation_curve
from sklearn.datasets import load_svmlight_files
from sklearn.model_selection import StratifiedKFold, cross_val_score, LeavePGroupsOut
from sklearn.datasets import make_classification
from xgboost.sklearn import XGBClassifier
from scipy.sparse import vstack
#use a fixed seed for reproducibility
seed = 123
np.random.seed(seed)
In [2]:
filename = './facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data.head(10)
Out[2]:
Set the 'Well Name' and 'Formation' columns to the category data type
In [3]:
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data.info()
In [4]:
training_data.describe()
Out[4]:
Check distribution of classes in whole dataset
In [5]:
plt.figure(figsize=(5,5))
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00','#1B4F72',
'#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_counts = training_data['Facies'].value_counts().sort_index()
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,title='Distribution of Training Data by Facies')
Out[5]:
Check distribution of classes in each well
In [6]:
wells = training_data['Well Name'].unique()
In [7]:
plt.figure(figsize=(15,9))
for index, w in enumerate(wells):
    ax = plt.subplot(2,5,index+1)
    facies_counts = pd.Series(np.zeros(9), index=range(1,10))
    facies_counts = facies_counts.add(training_data[training_data['Well Name']==w]['Facies'].value_counts().sort_index())
    #facies_counts.replace(np.nan,0)
    facies_counts.index = facies_labels
    facies_counts.plot(kind='bar',color=facies_colors,title=w)
    ax.set_ylim(0,160)
We can see that the classes are very imbalanced within each well.
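As a quick check (a sketch, not part of the original workflow), we can print the share of each well's most common facies to quantify this imbalance:
In [ ]:
# Sketch: fraction of samples taken by the most common facies in each well
for w in wells:
    counts = training_data.loc[training_data['Well Name'] == w, 'Facies'].value_counts(normalize=True)
    print("%s: dominant facies covers %.0f%% of samples" % (w, 100 * counts.max()))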
In [8]:
plt.figure(figsize=(5,5))
sns.heatmap(training_data.corr(), vmax=1.0, square=True)
Out[8]:
Now we are ready to test the XGBoost approach. We will use the confusion matrix and F1 score (imported above) as classification metrics, together with GridSearchCV, which is an excellent tool for parameter optimization.
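As a side note (an illustration, not part of the original workflow), for single-label multiclass predictions the micro-averaged F1 score equals plain accuracy, which is why 'f1_micro' and accuracy are used interchangeably as scores later on:
In [ ]:
# Toy example: micro-averaged F1 equals accuracy for single-label multiclass
y_true = np.array([0, 1, 2, 2, 1])
y_pred = np.array([0, 2, 2, 2, 1])
print(f1_score(y_true, y_pred, average='micro'))   # 0.8
print(accuracy_score(y_true, y_pred))              # 0.8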
In [9]:
X_train = training_data.drop(['Facies', 'Well Name','Formation','Depth'], axis = 1 )
Y_train = training_data['Facies' ] - 1
dtrain = xgb.DMatrix(X_train, Y_train)
In [39]:
features = ['GR','ILD_log10','DeltaPHI','PHIND','PE','NM_M','RELPOS']
The accuracy and accuracy_adjacent functions are defined below to quantify prediction correctness.
In [10]:
def accuracy(conf):
    total_correct = 0.
    nb_classes = conf.shape[0]
    for i in np.arange(0, nb_classes):
        total_correct += conf[i][i]
    acc = total_correct / sum(sum(conf))
    return acc

adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])

def accuracy_adjacent(conf, adjacent_facies):
    nb_classes = conf.shape[0]
    total_correct = 0.
    for i in np.arange(0, nb_classes):
        total_correct += conf[i][i]
        for j in adjacent_facies[i]:
            total_correct += conf[i][j]
    return total_correct / sum(sum(conf))
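A quick sanity check on these metrics (a toy example, not part of the original workflow): adjacent accuracy is always at least as high as plain accuracy, because it also credits predictions that land in a neighboring facies.
In [ ]:
# Toy confusion matrix: 10 correct per class, plus 5 CSiS samples predicted as FSiS
toy_conf = np.eye(9, dtype=int) * 10
toy_conf[1, 2] = 5
print(accuracy(toy_conf))                            # ~0.947
print(accuracy_adjacent(toy_conf, adjacent_facies))  # 1.0, since FSiS is adjacent to CSiS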
Before proceeding further, we define a function that will help us create XGBoost models and perform cross-validation.
In [11]:
skf = StratifiedKFold(n_splits=5)
In [13]:
cv = skf.split(X_train, Y_train)
In [24]:
def modelfit(alg, Xtrain, Ytrain, useTrainCV=True, cv_fold=skf):
    # Fit the algorithm on the data
    alg.fit(Xtrain, Ytrain, eval_metric='merror')
    # Predict on the training set
    dtrain_prediction = alg.predict(Xtrain)
    #dtrain_predprob = alg.predict_proba(Xtrain)[:,1]
    # Print model report
    print ("\nModel Report")
    print ("Accuracy : %.4g" % accuracy_score(Ytrain, dtrain_prediction))
    print ("F1 score (Train) : %f" % f1_score(Ytrain, dtrain_prediction, average='micro'))
    # Perform cross-validation
    if useTrainCV:
        cv_score = cross_val_score(alg, Xtrain, Ytrain, cv=cv_fold, scoring='f1_micro')
        print ("CV Score : Mean - %.7g | Std - %.7g | Min - %.7g | Max - %.7g" %
               (np.mean(cv_score), np.std(cv_score), np.min(cv_score), np.max(cv_score)))
    # Plot feature importance
    feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
    feat_imp.plot(kind='bar', title='Feature Importances')
    plt.ylabel('Feature Importance Score')
We are going to perform the following steps:
1. Choose a relatively high learning rate, e.g., 0.1. Usually a value between 0.05 and 0.3 works for different problems.
2. Determine the optimum number of trees for this learning rate. XGBoost has a very useful function called "cv" which performs cross-validation at each boosting iteration and thus returns the optimum number of trees required (a minimal sketch follows below).
3. Tune the tree-based parameters (max_depth, min_child_weight, gamma, subsample, colsample_bytree) for the chosen learning rate and number of trees.
4. Tune the regularization parameters (lambda, alpha), which can help reduce model complexity and enhance performance.
5. Lower the learning rate and finalize the optimal parameters.
In order to decide on the boosting parameters, we need to set some initial values for the other parameters. Let's take the following values:
1. max_depth = 5
2. min_child_weight = 1
3. gamma = 0
4. subsample, colsample_bytree = 0.8: this is a commonly used starting value.
5. scale_pos_weight = 1
Please note that all of the above are just initial estimates and will be tuned later. Let's use a learning rate of 0.05 here and check the optimum number of trees via cross-validation. The modelfit function defined above reports cross-validated scores; alternatively, XGBoost's native cv function can search the number of boosting rounds directly, as sketched below.
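For reference, here is a minimal sketch of that native xgb.cv call, using the dtrain DMatrix built earlier; the parameter values mirror the initial estimates above and are assumptions, not tuned results.
In [ ]:
# Sketch: search the number of boosting rounds with xgb.cv and early stopping
cv_params = {'eta': 0.05, 'max_depth': 5, 'min_child_weight': 1,
             'gamma': 0, 'subsample': 0.8, 'colsample_bytree': 0.8,
             'objective': 'multi:softmax', 'num_class': 9, 'seed': seed}
cv_results = xgb.cv(cv_params, dtrain, num_boost_round=500, nfold=5,
                    metrics='merror', early_stopping_rounds=50, seed=seed)
print("Optimum number of trees: %d" % len(cv_results))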
In [15]:
xgb1= XGBClassifier(
learning_rate=0.05,
objective = 'multi:softmax',
nthread = 4,
seed = seed
)
In [16]:
xgb1
Out[16]:
In [25]:
modelfit(xgb1, X_train, Y_train)
In [26]:
param_test1={
'n_estimators':range(20, 100, 10)
}
gs1 = GridSearchCV(xgb1,param_grid=param_test1,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs1.fit(X_train, Y_train)
gs1.grid_scores_, gs1.best_params_,gs1.best_score_
Out[26]:
In [27]:
gs1.best_estimator_
Out[27]:
In [29]:
param_test2={
'max_depth':range(5,16,2),
'min_child_weight':range(1,15,2)
}
gs2 = GridSearchCV(gs1.best_estimator_,param_grid=param_test2,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs2.fit(X_train, Y_train)
gs2.grid_scores_, gs2.best_params_,gs2.best_score_
Out[29]:
In [30]:
gs2.best_estimator_
Out[30]:
In [31]:
modelfit(gs2.best_estimator_, X_train, Y_train)
In [32]:
param_test3={
'gamma':[0,.05,.1,.15,.2,.3,.4],
'subsample':[0.6,.7,.75,.8,.85,.9],
'colsample_bytree':[i/10.0 for i in range(4,10)]
}
gs3 = GridSearchCV(gs2.best_estimator_,param_grid=param_test3,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs3.fit(X_train, Y_train)
gs3.grid_scores_, gs3.best_params_,gs3.best_score_
Out[32]:
In [33]:
gs3.best_estimator_
Out[33]:
In [34]:
modelfit(gs3.best_estimator_,X_train,Y_train)
In [35]:
param_test4={
'reg_alpha':[0, 1e-5, 1e-2, 0.1, 0.2],
'reg_lambda':[0, .25,.5,.75,.1]
}
gs4 = GridSearchCV(gs3.best_estimator_,param_grid=param_test4,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs4.fit(X_train, Y_train)
gs4.grid_scores_, gs4.best_params_,gs4.best_score_
Out[35]:
In [36]:
modelfit(gs4.best_estimator_,X_train, Y_train)
In [37]:
gs4.best_estimator_
Out[37]:
In [39]:
param_test5={
'reg_alpha':[.15,0.2,.25,.3,.4],
}
gs5 = GridSearchCV(gs4.best_estimator_,param_grid=param_test5,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs5.fit(X_train, Y_train)
gs5.grid_scores_, gs5.best_params_,gs5.best_score_
Out[39]:
In [42]:
modelfit(gs5.best_estimator_, X_train, Y_train)
In [43]:
gs5.best_estimator_
Out[43]:
In [44]:
xgb4 = XGBClassifier(
learning_rate = 0.025,
n_estimators=120,
max_depth=7,
min_child_weight=7,
gamma = 0.05,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.2,
reg_lambda =0.75,
objective='multi:softmax',
nthread =4,
seed = seed,
)
modelfit(xgb4,X_train, Y_train)
In [47]:
xgb5 = XGBClassifier(
learning_rate = 0.00625,
n_estimators=480,
max_depth=7,
min_child_weight=7,
gamma = 0.05,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.2,
reg_lambda =0.75,
objective='multi:softmax',
nthread =4,
seed = seed,
)
modelfit(xgb5,X_train, Y_train)
Next we use our tuned final model to do cross-validation on the training data set: in each iteration, one well is held out as test data and the remaining wells are used for training.
In [48]:
# Load data
filename = './facies_vectors.csv'
data = pd.read_csv(filename)
# Change to category data type
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
X_train_nowell = X_train.drop(['Well Name'], axis=1)
Y_train = data['Facies' ] - 1
# Final recommended model based on the extensive parameters search
model_final = gs5.best_estimator_
model_final.fit( X_train_nowell , Y_train , eval_metric = 'merror' )
Out[48]:
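Note that LeavePGroupsOut was imported earlier but not used. As a minimal sketch (an alternative to the explicit loop below, not part of the original workflow), the same leave-one-well-out evaluation could be written with it, grouping samples by well name:
In [ ]:
# Sketch: leave-one-well-out CV via LeavePGroupsOut, using the
# data, X_train_nowell, Y_train and model_final objects defined above
lpgo = LeavePGroupsOut(n_groups=1)
groups = data['Well Name'].astype(str).values
scores = cross_val_score(model_final, X_train_nowell, Y_train,
                         groups=groups, cv=lpgo, scoring='f1_micro')
print("Per-well F1 (micro):", np.round(scores, 3))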
In [49]:
# Leave one well out for cross validation
well_names = data['Well Name'].unique()
f1=[]
from sklearn.base import clone  # retrain a fresh copy of the model for each fold
for i in range(len(well_names)):
    # Split data for training and testing
    train_X = X_train[X_train['Well Name'] != well_names[i]]
    train_Y = Y_train[X_train['Well Name'] != well_names[i]]
    test_X = X_train[X_train['Well Name'] == well_names[i]]
    test_Y = Y_train[X_train['Well Name'] == well_names[i]]
    train_X = train_X.drop(['Well Name'], axis=1)
    test_X = test_X.drop(['Well Name'], axis=1)
    # Train the model based on the training wells
    model = clone(model_final)
    model.fit(train_X, train_Y, eval_metric='merror')
    # Predict on the held-out well
    predictions = model.predict(test_X)
    # Print report
    print ("\n------------------------------------------------------")
    print ("Validation on the left-out well " + well_names[i])
    conf = confusion_matrix(test_Y, predictions, labels=np.arange(9))
    print ("\nModel Report")
    print ("-Accuracy: %.6f" % (accuracy(conf)))
    print ("-Adjacent Accuracy: %.6f" % (accuracy_adjacent(conf, adjacent_facies)))
    print ("-F1 Score: %.6f" % (f1_score(test_Y, predictions, labels=np.arange(9), average='weighted')))
    f1.append(f1_score(test_Y, predictions, labels=np.arange(9), average='weighted'))
    facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
                     'WS', 'D', 'PS', 'BS']
    print ("\nConfusion Matrix Results")
    display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)
print ("\n------------------------------------------------------")
print ("Final Results")
print ("-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1))))
Use the final model to predict facies for the given test data set
In [50]:
# Load test data
test_data = pd.read_csv('validation_data_nofacies.csv')
test_data['Well Name'] = test_data['Well Name'].astype('category')
X_test = test_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
# Predict facies of unclassified data
Y_predicted = model_final.predict(X_test)
test_data['Facies'] = Y_predicted + 1
# Store the prediction
test_data.to_csv('Prediction4.csv')
In [51]:
test_data[test_data['Well Name']=='STUART'].head()
Out[51]:
In [52]:
test_data[test_data['Well Name']=='CRAWFORD'].head()
Out[52]:
In [57]:
def make_facies_log_plot(logs, facies_colors):
    # make sure logs are sorted by depth
    logs = logs.sort_values(by='Depth')
    cmap_facies = colors.ListedColormap(
        facies_colors[0:len(facies_colors)], 'indexed')
    ztop = logs.Depth.min(); zbot = logs.Depth.max()
    cluster = np.repeat(np.expand_dims(logs['Facies'].values, 1), 100, 1)
    f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
    ax[0].plot(logs.GR, logs.Depth, '-g')
    ax[1].plot(logs.ILD_log10, logs.Depth, '-')
    ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
    ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
    ax[4].plot(logs.PE, logs.Depth, '-', color='black')
    im = ax[5].imshow(cluster, interpolation='none', aspect='auto',
                      cmap=cmap_facies, vmin=1, vmax=9)
    divider = make_axes_locatable(ax[5])
    cax = divider.append_axes("right", size="20%", pad=0.05)
    cbar = plt.colorbar(im, cax=cax)
    cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
                                  'SiSh', ' MS ', ' WS ', ' D ',
                                  ' PS ', ' BS ']))
    cbar.set_ticks(range(0, 1)); cbar.set_ticklabels('')
    for i in range(len(ax)-1):
        ax[i].set_ylim(ztop, zbot)
        ax[i].invert_yaxis()
        ax[i].grid()
        ax[i].locator_params(axis='x', nbins=3)
    ax[0].set_xlabel("GR")
    ax[0].set_xlim(logs.GR.min(), logs.GR.max())
    ax[1].set_xlabel("ILD_log10")
    ax[1].set_xlim(logs.ILD_log10.min(), logs.ILD_log10.max())
    ax[2].set_xlabel("DeltaPHI")
    ax[2].set_xlim(logs.DeltaPHI.min(), logs.DeltaPHI.max())
    ax[3].set_xlabel("PHIND")
    ax[3].set_xlim(logs.PHIND.min(), logs.PHIND.max())
    ax[4].set_xlabel("PE")
    ax[4].set_xlim(logs.PE.min(), logs.PE.max())
    ax[5].set_xlabel('Facies')
    ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
    ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
    ax[5].set_xticklabels([])
    f.suptitle('Well: %s' % logs.iloc[0]['Well Name'], fontsize=14, y=0.94)
In [56]:
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
In [58]:
make_facies_log_plot(
test_data[test_data['Well Name'] == 'STUART'],
facies_colors)