https://medium.com/@pushkarmandot/build-your-first-deep-learning-neural-network-model-using-keras-in-python-a90b5864116d

Data and Business Problem:

Our aim is to predict customer churn for a bank, i.e. which customers are likely to leave the bank's service.


In [1]:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

dataset = pd.read_csv('Churn_Modelling.csv')

dataset.head()


Out[1]:
RowNumber CustomerId Surname CreditScore Geography Gender Age Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Exited
0 1 15634602 Hargrave 619 France Female 42 2 0.00 1 1 1 101348.88 1
1 2 15647311 Hill 608 Spain Female 41 1 83807.86 1 0 1 112542.58 0
2 3 15619304 Onio 502 France Female 42 8 159660.80 3 1 0 113931.57 1
3 4 15701354 Boni 699 France Female 39 1 0.00 2 0 0 93826.63 0
4 5 15737888 Mitchell 850 Spain Female 43 2 125510.82 1 1 1 79084.10 0

Create the matrix of features and the target vector. We exclude columns 1, 2 and 3 ('RowNumber', 'CustomerId' and 'Surname'), as they are not useful for the analysis. Column 14, 'Exited', is our target variable.


In [2]:
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
## Read this for categorical Encoding : http://pbpython.com/categorical-encoding.html
##pd.get_dummies(dataset, columns=["Geography", "Gender"], prefix=["Geography", "Gender"]).head()

In [3]:
def getXy_1(dataset, target):
    # One-hot encode the categorical columns directly with get_dummies
    df = pd.get_dummies(dataset, columns=["Geography", "Gender"], prefix=["Geography", "Gender"])

    y = df[target]
    X = df.loc[:, df.columns != target]
    return X, y


def getXy_2(dataset, target):
    # Label-encode the categorical columns first, then one-hot encode them.
    # The result is equivalent to getXy_1, except the dummy columns get
    # numeric suffixes (Geography_0, Geography_1, ...) instead of category names.
    lb = LabelEncoder()
    dataset['Gender'] = lb.fit_transform(dataset['Gender'])
    dataset['Geography'] = lb.fit_transform(dataset['Geography'])
    ## One-hot encoding
    dataset = pd.get_dummies(dataset, columns=['Geography', 'Gender'])
    y = dataset[target]
    X = dataset.loc[:, dataset.columns != target]
    return X, y

In [4]:
X,y = getXy_2(dataset,target='Exited')
print("X.columns:",X.columns)
X = X.iloc[:, 3:].values
y = y.values
y


X.columns: Index(['RowNumber', 'CustomerId', 'Surname', 'CreditScore', 'Age', 'Tenure',
       'Balance', 'NumOfProducts', 'HasCrCard', 'IsActiveMember',
       'EstimatedSalary', 'Geography_0', 'Geography_1', 'Geography_2',
       'Gender_0', 'Gender_1'],
      dtype='object')
Out[4]:
array([1, 0, 1, ..., 1, 1, 0], dtype=int64)
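Note that X.iloc[:, 3:] relies on the first three columns being RowNumber, CustomerId and Surname. A sketch of an order-independent alternative, assuming a fresh read of the CSV (here the dummy columns keep the category names, e.g. Geography_France, instead of numeric suffixes):

# Hypothetical variant: drop the identifier columns by name instead of by position
df = pd.read_csv('Churn_Modelling.csv')
df = df.drop(['RowNumber', 'CustomerId', 'Surname'], axis=1)
df = pd.get_dummies(df, columns=['Geography', 'Gender'])
y = df['Exited'].values
X = df.drop('Exited', axis=1).values   # same 13 features as above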

In [5]:
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train,X_test, y_train, y_test = train_test_split(X,y,test_size=0.2)

I know you are tired of data preprocessing, but I promise this is the last step. If you look at the data carefully, you will notice that it is not scaled consistently: some variables have values in the thousands while others are in the tens or single digits. We don't want any variable to dominate the others, so let's scale the data.

StandardScaler is available in scikit-learn. In the following code we fit a StandardScaler on the training data and transform it. To keep the scaling consistent, we reuse the same fitted scaler to transform the test data.
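Under the hood, StandardScaler standardizes each feature as z = (x - mean) / std, where the mean and standard deviation are computed on the training data only.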


In [6]:
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)   # reuse the scaler fitted on the training data

In [7]:
X_train


Out[7]:
array([[ 0.4034289 ,  0.85614078, -1.03419226, ...,  1.73205081,
         1.09196221, -1.09196221],
       [ 1.21234527, -1.04800752, -0.6883953 , ..., -0.57735027,
         1.09196221, -1.09196221],
       [ 0.33083384, -0.57197045,  1.38638647, ..., -0.57735027,
         1.09196221, -1.09196221],
       ..., 
       [ 0.78714564, -0.38155562,  0.00319862, ..., -0.57735027,
        -0.91578261,  0.91578261],
       [-0.85142856,  0.3801037 ,  1.04058951, ...,  1.73205081,
        -0.91578261,  0.91578261],
       [-0.24992664, -0.38155562,  0.69479255, ...,  1.73205081,
         1.09196221, -1.09196221]])

In [8]:
X_train.shape


Out[8]:
(8000, 13)

In [9]:
import keras
from keras.models import Sequential

from keras.layers.core import Dense, Dropout, Activation


Using TensorFlow backend.

In [10]:
model = Sequential()

In [11]:
model.add(Dense(16,input_dim=13))
model.add(Activation('relu'))
model.add(Dropout(0.2))

model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dropout(0.2))

model.add(Dense(8))
model.add(Activation('relu'))
model.add(Dropout(0.2))

model.add(Dense(1))
model.add(Activation('sigmoid'))

In [12]:
model.summary()


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 16)                224       
_________________________________________________________________
activation_1 (Activation)    (None, 16)                0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 16)                0         
_________________________________________________________________
dense_2 (Dense)              (None, 16)                272       
_________________________________________________________________
activation_2 (Activation)    (None, 16)                0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 16)                0         
_________________________________________________________________
dense_3 (Dense)              (None, 8)                 136       
_________________________________________________________________
activation_3 (Activation)    (None, 8)                 0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 8)                 0         
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 9         
_________________________________________________________________
activation_4 (Activation)    (None, 1)                 0         
=================================================================
Total params: 641
Trainable params: 641
Non-trainable params: 0
_________________________________________________________________
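As an aside, the same architecture can be written more compactly by passing the activation directly to each Dense layer; a sketch equivalent to the model defined above:

# Compact form of the same network
model = Sequential()
model.add(Dense(16, input_dim=13, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(16, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(8, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))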

In [13]:
# Compiling Neural Network
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
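Passing the string 'adam' uses the optimizer's default settings. If you want to tune the learning rate, you can pass an optimizer instance instead; a sketch (not run here):

# Hypothetical variant of the compile call with an explicit learning rate
from keras.optimizers import Adam
model.compile(optimizer = Adam(lr = 0.001), loss = 'binary_crossentropy', metrics = ['accuracy'])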

In [14]:
# Fitting our model 
model.fit(X_train, y_train, batch_size = 10, epochs = 100)


Epoch 1/100
8000/8000 [==============================] - 1s - loss: 0.5072 - acc: 0.7852     
Epoch 2/100
8000/8000 [==============================] - 1s - loss: 0.4530 - acc: 0.8054
Epoch 3/100
8000/8000 [==============================] - 1s - loss: 0.4448 - acc: 0.8094
Epoch 4/100
8000/8000 [==============================] - 1s - loss: 0.4370 - acc: 0.8106     
Epoch 5/100
8000/8000 [==============================] - 1s - loss: 0.4254 - acc: 0.8161     
Epoch 6/100
8000/8000 [==============================] - 1s - loss: 0.4220 - acc: 0.8200
Epoch 7/100
8000/8000 [==============================] - 1s - loss: 0.4067 - acc: 0.8296     
Epoch 8/100
8000/8000 [==============================] - 1s - loss: 0.4011 - acc: 0.8320
Epoch 9/100
8000/8000 [==============================] - 1s - loss: 0.3975 - acc: 0.8350     
Epoch 10/100
8000/8000 [==============================] - 1s - loss: 0.3881 - acc: 0.8380     
Epoch 11/100
8000/8000 [==============================] - 1s - loss: 0.3847 - acc: 0.8395     
Epoch 12/100
8000/8000 [==============================] - 1s - loss: 0.3832 - acc: 0.8424     
Epoch 13/100
8000/8000 [==============================] - 1s - loss: 0.3801 - acc: 0.8425
Epoch 14/100
8000/8000 [==============================] - 1s - loss: 0.3779 - acc: 0.8439     
Epoch 15/100
8000/8000 [==============================] - 1s - loss: 0.3746 - acc: 0.8475     
Epoch 16/100
8000/8000 [==============================] - 1s - loss: 0.3777 - acc: 0.8462     
Epoch 17/100
8000/8000 [==============================] - 1s - loss: 0.3693 - acc: 0.8492
Epoch 18/100
8000/8000 [==============================] - 1s - loss: 0.3786 - acc: 0.8421     
Epoch 19/100
8000/8000 [==============================] - 1s - loss: 0.3712 - acc: 0.8476     
Epoch 20/100
8000/8000 [==============================] - 1s - loss: 0.3735 - acc: 0.8475     
Epoch 21/100
8000/8000 [==============================] - 1s - loss: 0.3721 - acc: 0.8475     
Epoch 22/100
8000/8000 [==============================] - 1s - loss: 0.3711 - acc: 0.8481
Epoch 23/100
8000/8000 [==============================] - 1s - loss: 0.3730 - acc: 0.8475     
Epoch 24/100
8000/8000 [==============================] - 1s - loss: 0.3715 - acc: 0.8457     
Epoch 25/100
8000/8000 [==============================] - 1s - loss: 0.3699 - acc: 0.8460     
Epoch 26/100
8000/8000 [==============================] - 1s - loss: 0.3649 - acc: 0.8465
Epoch 27/100
8000/8000 [==============================] - 1s - loss: 0.3687 - acc: 0.8484     
Epoch 28/100
8000/8000 [==============================] - 1s - loss: 0.3669 - acc: 0.8474     
Epoch 29/100
8000/8000 [==============================] - 1s - loss: 0.3691 - acc: 0.8467     
Epoch 30/100
8000/8000 [==============================] - 1s - loss: 0.3686 - acc: 0.8486     
Epoch 31/100
8000/8000 [==============================] - 1s - loss: 0.3679 - acc: 0.8489     
Epoch 32/100
8000/8000 [==============================] - 1s - loss: 0.3672 - acc: 0.8492
Epoch 33/100
8000/8000 [==============================] - 1s - loss: 0.3653 - acc: 0.8531     
Epoch 34/100
8000/8000 [==============================] - 1s - loss: 0.3634 - acc: 0.8500     
Epoch 35/100
8000/8000 [==============================] - 1s - loss: 0.3672 - acc: 0.8459     
Epoch 36/100
8000/8000 [==============================] - 1s - loss: 0.3636 - acc: 0.8490     
Epoch 37/100
8000/8000 [==============================] - 1s - loss: 0.3645 - acc: 0.8494     
Epoch 38/100
8000/8000 [==============================] - 1s - loss: 0.3635 - acc: 0.8502     
Epoch 39/100
8000/8000 [==============================] - 1s - loss: 0.3556 - acc: 0.8517     
Epoch 40/100
8000/8000 [==============================] - 1s - loss: 0.3628 - acc: 0.8496     
Epoch 41/100
8000/8000 [==============================] - 1s - loss: 0.3656 - acc: 0.8481     
Epoch 42/100
8000/8000 [==============================] - 1s - loss: 0.3619 - acc: 0.8474     
Epoch 43/100
8000/8000 [==============================] - 1s - loss: 0.3646 - acc: 0.8486     
Epoch 44/100
8000/8000 [==============================] - 1s - loss: 0.3610 - acc: 0.8524     
Epoch 45/100
8000/8000 [==============================] - 1s - loss: 0.3583 - acc: 0.8527     
Epoch 46/100
8000/8000 [==============================] - 1s - loss: 0.3630 - acc: 0.8487     
Epoch 47/100
8000/8000 [==============================] - 1s - loss: 0.3624 - acc: 0.8505     
Epoch 48/100
8000/8000 [==============================] - 1s - loss: 0.3603 - acc: 0.8531
Epoch 49/100
8000/8000 [==============================] - 1s - loss: 0.3592 - acc: 0.8516     
Epoch 50/100
8000/8000 [==============================] - 1s - loss: 0.3579 - acc: 0.8522
Epoch 51/100
8000/8000 [==============================] - 1s - loss: 0.3567 - acc: 0.8532     
Epoch 52/100
8000/8000 [==============================] - 1s - loss: 0.3579 - acc: 0.8530     
Epoch 53/100
8000/8000 [==============================] - 1s - loss: 0.3604 - acc: 0.8492     
Epoch 54/100
8000/8000 [==============================] - 1s - loss: 0.3570 - acc: 0.8556     
Epoch 55/100
8000/8000 [==============================] - 1s - loss: 0.3557 - acc: 0.8536     
Epoch 56/100
8000/8000 [==============================] - 1s - loss: 0.3594 - acc: 0.8537     
Epoch 57/100
8000/8000 [==============================] - 1s - loss: 0.3621 - acc: 0.8486
Epoch 58/100
8000/8000 [==============================] - 1s - loss: 0.3568 - acc: 0.8507
Epoch 59/100
8000/8000 [==============================] - 1s - loss: 0.3609 - acc: 0.8490     
Epoch 60/100
8000/8000 [==============================] - 1s - loss: 0.3538 - acc: 0.8547     
Epoch 61/100
8000/8000 [==============================] - 1s - loss: 0.3588 - acc: 0.8495     
Epoch 62/100
8000/8000 [==============================] - 1s - loss: 0.3653 - acc: 0.8507     
Epoch 63/100
8000/8000 [==============================] - 1s - loss: 0.3601 - acc: 0.8497     
Epoch 64/100
8000/8000 [==============================] - 1s - loss: 0.3599 - acc: 0.8519     
Epoch 65/100
8000/8000 [==============================] - 1s - loss: 0.3598 - acc: 0.8485
Epoch 66/100
8000/8000 [==============================] - 1s - loss: 0.3595 - acc: 0.8495     
Epoch 67/100
8000/8000 [==============================] - 1s - loss: 0.3574 - acc: 0.8521     
Epoch 68/100
8000/8000 [==============================] - 1s - loss: 0.3597 - acc: 0.8494     
Epoch 69/100
8000/8000 [==============================] - 1s - loss: 0.3600 - acc: 0.8511
Epoch 70/100
8000/8000 [==============================] - 1s - loss: 0.3532 - acc: 0.8527
Epoch 71/100
8000/8000 [==============================] - 1s - loss: 0.3603 - acc: 0.8515     
Epoch 72/100
8000/8000 [==============================] - 1s - loss: 0.3588 - acc: 0.8501     
Epoch 73/100
8000/8000 [==============================] - 1s - loss: 0.3552 - acc: 0.8545     
Epoch 74/100
8000/8000 [==============================] - 1s - loss: 0.3628 - acc: 0.8495
Epoch 75/100
8000/8000 [==============================] - 1s - loss: 0.3532 - acc: 0.8521     
Epoch 76/100
8000/8000 [==============================] - 1s - loss: 0.3582 - acc: 0.8527     
Epoch 77/100
8000/8000 [==============================] - 1s - loss: 0.3589 - acc: 0.8512
Epoch 78/100
8000/8000 [==============================] - 1s - loss: 0.3572 - acc: 0.8549     
Epoch 79/100
8000/8000 [==============================] - 1s - loss: 0.3580 - acc: 0.8515     
Epoch 80/100
8000/8000 [==============================] - 1s - loss: 0.3554 - acc: 0.8525     
Epoch 81/100
8000/8000 [==============================] - 1s - loss: 0.3574 - acc: 0.8522     
Epoch 82/100
8000/8000 [==============================] - 1s - loss: 0.3544 - acc: 0.8545     
Epoch 83/100
8000/8000 [==============================] - 1s - loss: 0.3556 - acc: 0.8539     
Epoch 84/100
8000/8000 [==============================] - 1s - loss: 0.3573 - acc: 0.8531     
Epoch 85/100
8000/8000 [==============================] - 1s - loss: 0.3594 - acc: 0.8516     
Epoch 86/100
8000/8000 [==============================] - 1s - loss: 0.3576 - acc: 0.8550     
Epoch 87/100
8000/8000 [==============================] - 1s - loss: 0.3557 - acc: 0.8516     
Epoch 88/100
8000/8000 [==============================] - 1s - loss: 0.3540 - acc: 0.8551     
Epoch 89/100
8000/8000 [==============================] - 1s - loss: 0.3593 - acc: 0.8477     
Epoch 90/100
8000/8000 [==============================] - 1s - loss: 0.3544 - acc: 0.8499     
Epoch 91/100
8000/8000 [==============================] - 1s - loss: 0.3552 - acc: 0.8535     
Epoch 92/100
8000/8000 [==============================] - 1s - loss: 0.3544 - acc: 0.8522     
Epoch 93/100
8000/8000 [==============================] - 1s - loss: 0.3524 - acc: 0.8559     
Epoch 94/100
8000/8000 [==============================] - 1s - loss: 0.3555 - acc: 0.8510     
Epoch 95/100
8000/8000 [==============================] - 1s - loss: 0.3544 - acc: 0.8561
Epoch 96/100
8000/8000 [==============================] - 1s - loss: 0.3565 - acc: 0.8511     
Epoch 97/100
8000/8000 [==============================] - 1s - loss: 0.3557 - acc: 0.8536     
Epoch 98/100
8000/8000 [==============================] - 1s - loss: 0.3591 - acc: 0.8530
Epoch 99/100
8000/8000 [==============================] - 1s - loss: 0.3536 - acc: 0.8557     
Epoch 100/100
8000/8000 [==============================] - 1s - loss: 0.3592 - acc: 0.8525     
Out[14]:
<keras.callbacks.History at 0x2448633e4e0>
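The accuracy printed above is training accuracy only. To also monitor performance on held-out data during training, Keras can carve a validation set out of the training data; a sketch of such a fit call (a variant, not run here):

# Hypothetical variant that also reports validation loss/accuracy each epoch
model.fit(X_train, y_train, batch_size = 10, epochs = 100, validation_split = 0.1)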

In [15]:
## Predicting the test set results
y_pred = model.predict(X_test)
y_pred = (y_pred > 0.4)   # convert churn probabilities to class labels using a 0.4 threshold
y_pred


Out[15]:
array([[False],
       [False],
       [ True],
       ..., 
       [False],
       [False],
       [False]], dtype=bool)

In [16]:
# Creating the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)

In [17]:
cm


Out[17]:
array([[1497,  104],
       [ 172,  227]])
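Reading the matrix: 1497 true negatives, 227 true positives, 104 false positives and 172 false negatives, i.e. a test accuracy of (1497 + 227) / 2000 ≈ 0.86. A quick cross-check with scikit-learn, assuming y_test and y_pred from the cells above:

from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))   # (1497 + 227) / 2000 = 0.862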
