In this Python notebook we propose a facies classification model, building on the simple neural network solution proposed by LA_Team, in order to outperform the prediction model proposed in the "Predicting facies from well logs" challenge.
Given the limited size of the training data set, Deep Learning is not likely to exceed the accuracy of results from refined Machine Learning techniques (such as Gradient Boosted Trees). However, we chose to use the opportunity to advance our understanding of Deep Learning network design, and have enjoyed participating in the contest. With a substantially larger training set and perhaps more facies ambiguity, Deep Learning could be a preferred approach to this sort of problem.
We use three key innovations:
Since our submission #2 we have:
... and since our submission #3 we have:
... and since our submission #4 we have:
The dataset we will use comes from a University of Kansas class exercise on Neural Networks and Fuzzy Systems. The exercise is based on a consortium project that used machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more information on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset consists of log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.
The data come from the Council Grove gas reservoir in southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles of southwestern Kansas. The training data come from nine wells (4149 examples), each example consisting of seven predictor variables and a rock facies (class) label; the validation (test) data (830 examples from two wells) contain the same seven predictor variables in the feature vector. Facies are based on examination of cores from the nine wells, described vertically at half-foot intervals. The predictor variables comprise five wireline log measurements and two geologic constraining variables derived from geologic knowledge; all are essentially continuous variables sampled at a half-foot rate.
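As a quick check of the example counts and the half-foot sampling described above, here is a minimal sketch; it assumes the combined file train_test_data.csv that is loaded later in this notebook (training wells followed by the two validation wells):
# Minimal sanity check; assumes the combined 'train_test_data.csv' loaded later in this notebook
import pandas as pd
df = pd.read_csv('train_test_data.csv')
print(df['Well Name'].value_counts())                                        # examples per well
print(df.groupby('Well Name')['Depth'].apply(lambda d: d.diff().median()))   # expected spacing: 0.5 ft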
The seven predictor variables are:

* Five wireline log measurements: GR (gamma ray), ILD_log10 (logarithm of deep resistivity), DeltaPHI (neutron-density porosity difference), PHIND (average neutron-density porosity) and PE (photoelectric effect)
* Two geologic constraining variables: NM_M (nonmarine/marine indicator) and RELPOS (relative position)

The nine discrete facies (classes of rocks) are:

1. Sandstone (SS)
2. Coarse siltstone (CSiS)
3. Fine siltstone (FSiS)
4. Marine siltstone and shale (SiSh)
5. Mudstone (MS)
6. Wackestone (WS)
7. Dolomite (D)
8. Packstone (PS)
9. Bafflestone (BS)

These facies aren't entirely discrete; they grade into one another, and some have neighboring facies that are quite similar. Mislabeling between such neighbors is to be expected. The following table lists the facies, their abbreviated labels and their approximate neighbors; a short sketch after the table shows how this adjacency information can be used in scoring.
Facies | Label | Adjacent Facies |
---|---|---|
1 | SS | 2 |
2 | CSiS | 1,3 |
3 | FSiS | 2 |
4 | SiSh | 5 |
5 | MS | 4,6 |
6 | WS | 5,7 |
7 | D | 6,8 |
8 | PS | 6,7,9 |
9 | BS | 7,8 |
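To make the adjacency idea concrete, below is a minimal sketch of an "adjacent accuracy" score: the fraction of samples whose predicted facies equals the true facies or one of its listed neighbors. The mapping is taken from the table above (0-based indices); the notebook's own confusion-matrix implementation appears further below, and the labels in the example call are purely illustrative.
# Adjacency from the table above, converted to 0-based facies indices
adjacent = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3, 5],
            5: [4, 6], 6: [5, 7], 7: [5, 6, 8], 8: [6, 7]}

def adjacent_accuracy_sketch(y_true, y_pred):
    # A prediction counts as correct if it equals the true facies or an adjacent one
    hits = sum(1 for t, p in zip(y_true, y_pred) if p == t or p in adjacent[t])
    return hits / float(len(y_true))

print(adjacent_accuracy_sketch([1, 4, 7], [2, 4, 6]))  # illustrative labels only -> 1.0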
In [1]:
%%sh
pip install pandas
pip install scikit-learn
pip install keras
In [2]:
from __future__ import print_function
import time
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
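# Note: the Keras imports below (and the rest of this notebook) use the Keras 1.x API (Convolution1D, merge, np_utils, ...)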
from keras.preprocessing import sequence
from keras.models import Model, Sequential
from keras.constraints import maxnorm, nonneg
from keras.optimizers import SGD, Adam, Adamax, Nadam
from keras.regularizers import l2, activity_l2
from keras.layers import Input, Dense, Dropout, Activation, Convolution1D, Cropping1D, Cropping2D, Permute, Flatten, MaxPooling1D, merge
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
from sklearn.model_selection import GridSearchCV
In [3]:
data = pd.read_csv('train_test_data.csv')
# Set 'Well Name' and 'Formation' fields as categories
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
def coding(col, codeDict):
colCoded = pd.Series(col, copy=True)
for key, value in codeDict.items():
colCoded.replace(key, value, inplace=True)
return colCoded
data['Formation_coded'] = coding(data['Formation'], {'A1 LM':1,'A1 SH':2,'B1 LM':3,'B1 SH':4,'B2 LM':5,'B2 SH':6,'B3 LM':7,'B3 SH':8,'B4 LM':9,'B4 SH':10,'B5 LM':11,'B5 SH':12,'C LM':13,'C SH':14})
formation = data['Formation_coded'].values[:,np.newaxis]
# Parameters
feature_names = ['Depth', 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
# All labeled wells (used only in the optional plotting at the end of the notebook)
well_names_test = ['SHRIMPLIN', 'ALEXANDER D', 'SHANKLE', 'LUKE G U', 'KIMZEY A', 'CROSS H CATTLE', 'NOLAN', 'Recruit F9', 'NEWBY', 'CHURCHMAN BIBLE']
# The two blind wells to be predicted
well_names_validate = ['STUART', 'CRAWFORD']
data_vectors = data[feature_names].values
correct_facies_labels = data['Facies'].values
nm_m = data['NM_M'].values
# Engineered feature: for each sample, the number of samples until the next change in NM_M
nm_m_dist = np.zeros((nm_m.shape[0], 1), dtype=int)
for i in range(nm_m.shape[0]):
    count = 1
    while (i+count < nm_m.shape[0]-1 and nm_m[i+count] == nm_m[i]):
        count = count + 1
    nm_m_dist[i] = count
# Engineered feature: for each sample, the number of samples until the next change in Formation
formation_dist = np.zeros((formation.shape[0], 1), dtype=int)
for i in range(formation.shape[0]):
    count = 1
    while (i+count < formation.shape[0]-1 and formation[i+count] == formation[i]):
        count = count + 1
    formation_dist[i] = count
well_labels = data[['Well Name', 'Facies']].values
depth = data['Depth'].values
# Fill missing values (only the 'PE' log has gaps) with the column mean
imp = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(data_vectors)
data_vectors = imp.transform(data_vectors)
# Append the engineered features and the coded formation, then standardize
data_vectors = np.hstack([data_vectors, nm_m_dist, formation, formation_dist])
scaler = preprocessing.StandardScaler().fit(data_vectors)
scaled_features = scaler.transform(data_vectors)
# Prepend 'Well Name' and 'Facies' so rows can be filtered by well later
data_out = np.hstack([well_labels, scaled_features])
Split the data into training data and blind (validation) data, and output as NumPy arrays.
In [4]:
def preprocess(data_out):
    data = data_out
    # The first 4149 rows are the labeled training wells; the remaining rows are the two blind wells
    X = data[0:4149, 0:13]
    y = np.concatenate((data[0:4149, 0].reshape(4149, 1),
                        np_utils.to_categorical(correct_facies_labels[0:4149]-1)), axis=1)
    X_test = data[4149:, 0:13]
    return X, y, X_test
X_train_in, y_train, X_test_in = preprocess(data_out)
print(X_train_in.shape)
In [5]:
conv_domain = 11
# Reproducibility
np.random.seed(7)
# Expand each sample into a window of conv_domain consecutive samples centred on that sample,
# clamping at the top and bottom of the array so edge windows repeat the boundary values
def expand_dims(input):
    r = int((conv_domain-1)/2)
    l = input.shape[0]
    n_input_vars = input.shape[1]
    output = np.zeros((l, conv_domain, n_input_vars))
    for i in range(l):
        for j in range(conv_domain):
            for k in range(n_input_vars):
                output[i, j, k] = input[max(0, min(i+j-r, l-1)), k]
    return output
X_train = np.empty((0,conv_domain,11), dtype=float)
X_test = np.empty((0,conv_domain,11), dtype=float)
y_select = np.empty((0,9), dtype=int)
# Wells used for training ('Recruit F9' is excluded)
well_names_train = ['SHRIMPLIN', 'ALEXANDER D', 'SHANKLE', 'LUKE G U', 'KIMZEY A', 'CROSS H CATTLE', 'NOLAN', 'NEWBY', 'CHURCHMAN BIBLE']
for wellId in well_names_train:
X_train_subset = X_train_in[X_train_in[:, 0] == wellId][:,2:13]
X_train_subset = expand_dims(X_train_subset)
X_train = np.concatenate((X_train,X_train_subset),axis=0)
y_select = np.concatenate((y_select, y_train[y_train[:, 0] == wellId][:,1:11]), axis=0)
for wellId in well_names_validate:
X_test_subset = X_test_in[X_test_in[:, 0] == wellId][:,2:13]
X_test_subset = expand_dims(X_test_subset)
X_test = np.concatenate((X_test,X_test_subset),axis=0)
y_train = y_select
print(X_train.shape)
print(X_test.shape)
print(y_select.shape)
In [13]:
# Set parameters
input_dim = 11
output_dim = 9
n_per_batch = 128
epochs = 100
crop_factor = int(conv_domain/2)
filters_per_log = 7
n_convolutions = input_dim*filters_per_log
starting_weights = [np.zeros((conv_domain, 1, input_dim, n_convolutions)), np.ones((n_convolutions))]
norm_factor=float(conv_domain)*2.0
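# Hand-crafted initial filter shapes (seven per input log) give the convolution a structured
# starting point; training is free to adjust them. The loop below sets, for each input log i:
#   +0, +1: upward ramp across the depth window     +2, +3: downward ramp
#   +4:     V shape (weight on the window edges)    +5:     inverted V (weight on the centre)
#   +6:     constant 0.25 (a running-average-like filter)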
for i in range(input_dim):
for j in range(conv_domain):
starting_weights[0][j, 0, i, i*filters_per_log+0] = j/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+1] = j/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+2] = (conv_domain-j)/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+3] = (conv_domain-j)/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+4] = (2*abs(crop_factor-j))/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+5] = (conv_domain-2*abs(crop_factor-j))/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+6] = 0.25
def dnn_model(init_dropout_rate=0.5, main_dropout_rate=0.5,
hidden_dim_1=24, hidden_dim_2=36,
max_norm=10, nb_conv=n_convolutions):
    # Define the model: a convolutional branch over the depth window plus a skip branch
    # that keeps only the central sample, concatenated ahead of the dense layers
    inputs = Input(shape=(conv_domain, input_dim,))
    inputs_dropout = Dropout(init_dropout_rate)(inputs)
    # Convolutional branch (non-negative weights, initialized with the hand-crafted filters above)
    x1 = Convolution1D(nb_conv, conv_domain, border_mode='valid', weights=starting_weights, activation='tanh', input_shape=(conv_domain, input_dim), input_length=input_dim, W_constraint=nonneg())(inputs_dropout)
    x1 = Flatten()(x1)
    # Skip branch: crop the depth window down to its central sample
    xn = Cropping1D(cropping=(crop_factor, crop_factor))(inputs_dropout)
    xn = Flatten()(xn)
    xA = merge([x1, xn], mode='concat')
    xA = Dropout(main_dropout_rate)(xA)
    xA = Dense(hidden_dim_1, init='uniform', activation='relu', W_constraint=maxnorm(max_norm))(xA)
    # Re-inject the central sample before the second dense layer
    x = merge([xA, xn], mode='concat')
    x = Dropout(main_dropout_rate)(x)
    x = Dense(hidden_dim_2, init='uniform', activation='relu', W_constraint=maxnorm(max_norm))(x)
    predictions = Dense(output_dim, init='uniform', activation='softmax')(x)
model = Model(input=inputs, output=predictions)
optimizerNadam = Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08, schedule_decay=0.004)
model.compile(loss='categorical_crossentropy', optimizer=optimizerNadam, metrics=['accuracy'])
return model
# Build the model and print a summary
t0 = time.time()
model_dnn = dnn_model()
model_dnn.summary()
t1 = time.time()
print("Build time = %d seconds" % (t1-t0))
# Visualize the convolution filter weights (layer index 2 is the Convolution1D layer)
def plot_weights(n_convs_disp=input_dim):
    layerID = 2
print(model_dnn.layers[layerID].get_weights()[0].shape)
print(model_dnn.layers[layerID].get_weights()[1].shape)
fig, ax = plt.subplots(figsize=(12,10))
for i in range(n_convs_disp):
plt.subplot(input_dim,1,i+1)
plt.imshow(model_dnn.layers[layerID].get_weights()[0][:,0,i,:], interpolation='none')
plt.show()
plot_weights(1)
In [14]:
# Train the model
t0 = time.time()
model_dnn.fit(X_train, y_train, batch_size=n_per_batch, nb_epoch=epochs, verbose=0)
t1 = time.time()
print("Train time = %d seconds" % (t1-t0) )
# Predict on the training set (to assess model fit)
t0 = time.time()
y_predicted = model_dnn.predict(X_train, batch_size=n_per_batch, verbose=0)
t1 = time.time()
print("Prediction time = %d seconds" % (t1-t0))
# Print report
# Convert one-hot encodings back to class indices in [0, 8]
y_ = np.zeros((len(y_train),1))
for i in range(len(y_train)):
y_[i] = np.argmax(y_train[i])
y_predicted_ = np.zeros((len(y_predicted), 1))
for i in range(len(y_predicted)):
y_predicted_[i] = np.argmax( y_predicted[i] )
# Confusion Matrix
conf = confusion_matrix(y_, y_predicted_)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
# Print Results
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("\nConfusion Matrix")
display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)
In [8]:
plot_weights()
In [9]:
# Cross-validation: cv receives the integer number of splits (5-fold), since
# StratifiedKFold cannot stratify directly on one-hot targets
def cross_validate():
    t0 = time.time()
    estimator = KerasClassifier(build_fn=dnn_model, nb_epoch=epochs, batch_size=n_per_batch, verbose=0)
    skf = StratifiedKFold(n_splits=5, shuffle=True)
    results_dnn = cross_val_score(estimator, X_train, y_train, cv=skf.get_n_splits(X_train, y_train))
t1 = time.time()
print("Cross Validation time = %d" % (t1-t0) )
print(' Cross Validation Results')
print( results_dnn )
print(np.mean(results_dnn))
cross_validate()
In [10]:
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
def make_facies_log_plot(logs, facies_colors, y_test=None, wellId=None):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
facies = np.zeros(2*(int(zbot-ztop)+1))
shift = 0
depth = ztop
for i in range(logs.Depth.count()-1):
while (depth < logs.Depth.values[i] + 0.25 and depth < zbot+0.25):
if (i<logs.Depth.count()-1):
new = logs['Facies'].values[i]
facies[shift] = new
depth += 0.5
shift += 1
facies = facies[0:facies.shape[0]-1]
cluster=np.repeat(np.expand_dims(facies,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=8, gridspec_kw={'width_ratios':[1,1,1,1,1,1,2,2]}, figsize=(10, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
ax[5].plot(logs.NM_M, logs.Depth, '-', color='black')
if (y_test is not None):
for i in range(9):
if (wellId == 'STUART'):
ax[6].plot(y_test[0:474,i], logs.Depth, color=facies_colors[i], lw=1.5)
else:
ax[6].plot(y_test[474:,i], logs.Depth, color=facies_colors[i], lw=1.5)
im=ax[7].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[7])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=5)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel("NM_M")
ax[5].set_xlim(logs.NM_M.min()-1.,logs.NM_M.max()+1.)
ax[6].set_xlabel("Facies Prob")
ax[6].set_xlim(0.0,1.0)
ax[7].set_xlabel('Facies')
ax[0].set_yticklabels([]);
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[6].set_xticklabels([]); ax[7].set_xticklabels([]);
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
In [15]:
# DNN model Prediction
y_test = model_dnn.predict( X_test , batch_size=n_per_batch, verbose=0)
predictions_dnn = np.zeros((len(y_test),1))
for i in range(len(y_test)):
predictions_dnn[i] = np.argmax(y_test[i]) + 1
predictions_dnn = predictions_dnn.astype(int)
# Store results
train_data = pd.read_csv('train_test_data.csv')
test_data = pd.read_csv('../validation_data_nofacies.csv')
test_data['Facies'] = predictions_dnn
test_data.to_csv('Prediction_StoDIG_5_CNN.csv')
for wellId in well_names_validate:
make_facies_log_plot( test_data[test_data['Well Name'] == wellId], facies_colors=facies_colors, y_test=y_test, wellId=wellId)
#for wellId in well_names_test:
# make_facies_log_plot( train_data[train_data['Well Name'] == wellId], facies_colors=facies_colors)