FDMS TME3

Kaggle How Much Did It Rain? II

Florian Toque & Paul Willot

Data visualization


In [1]:
# from __future__ import exam_success
from __future__ import absolute_import
from __future__ import print_function

%matplotlib inline
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
import scipy.stats as stats

# Sk cheats
from sklearn.cross_validation import cross_val_score  # cross val
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.preprocessing import Imputer   # get rid of nan
A few words about the dataset:

  • 13,765,202 rows in train.csv
  • 8,022,757 rows in test.csv

Predictions are made over the US corn-growing states (mainly Iowa, Illinois and Indiana) during the season with the highest rainfall, April to August (as illustrated by Iowa).

The Kaggle page indicates that the dataset has been shuffled, so working on a subset seems acceptable.
However, the test set is not extracted from the same data as the training set, which makes the evaluation trickier.
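
The reduced file loaded below can be reproduced with a quick cut of the shuffled train.csv. A minimal sketch (the actual extraction script is not part of this notebook, and a plain row cut may truncate the observations of the last Id):


In [ ]:
import pandas as pd

# Keep the first 100,000 rows of the shuffled training file
# (path "data/train.csv" assumed, matching the layout used in this notebook)
subset = pd.read_csv("data/train.csv", nrows=100000)
subset.to_csv("data/reduced_train_100000.csv", index=False)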

Load the dataset


In [2]:
%%time
filename = "data/reduced_train_100000.csv"
#filename = "data/reduced_train_100000.csv"
raw = pd.read_csv(filename)
raw = raw.set_index('Id')


CPU times: user 328 ms, sys: 64.8 ms, total: 392 ms
Wall time: 396 ms

In [3]:
raw['Expected'].describe()


Out[3]:
count    100000.000000
mean        129.579825
std         687.622542
min           0.010000
25%           0.254000
50%           1.016000
75%           3.556002
max       32740.617000
Name: Expected, dtype: float64

Per Wikipedia, a rainfall rate above 421 mm/h is classified as "Extreme/large hail".
If we really measured 32.74 meters of rain per hour (the maximum above), we should probably start building Noah's ark.
Therefore, it seems reasonable to drop the largest values and treat them as outliers.
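
To see how much this cut removes, we can check the fraction of rows above a few candidate caps before settling on 1000 mm (a quick sketch on the raw frame):


In [ ]:
# Fraction of rows that each candidate cap (in mm) would drop
for cap in (300, 500, 1000, 2000):
    dropped = (raw['Expected'] >= cap).mean() * 100
    print("cap %4d mm -> %5.2f%% of rows dropped" % (cap, dropped))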


In [4]:
# Considering that the gauge may concentrate the rainfall, we set the cap to 1000 mm
# Comment out the filtering line to analyse the complete dataset
l = len(raw)
raw = raw[raw['Expected'] < 1000]
print("Dropped %d (%0.2f%%)"%(l-len(raw),(l-len(raw))/float(l)*100))


Dropped 4175 (4.17%)

In [5]:
raw.head(5)


Out[5]:
minutes_past radardist_km Ref Ref_5x5_10th Ref_5x5_50th Ref_5x5_90th RefComposite RefComposite_5x5_10th RefComposite_5x5_50th RefComposite_5x5_90th ... RhoHV_5x5_90th Zdr Zdr_5x5_10th Zdr_5x5_50th Zdr_5x5_90th Kdp Kdp_5x5_10th Kdp_5x5_50th Kdp_5x5_90th Expected
Id
1 3 10 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.254
1 16 10 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.254
1 25 10 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.254
1 35 10 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.254
1 45 10 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.254

5 rows × 23 columns


In [6]:
raw.describe()


Out[6]:
minutes_past radardist_km Ref Ref_5x5_10th Ref_5x5_50th Ref_5x5_90th RefComposite RefComposite_5x5_10th RefComposite_5x5_50th RefComposite_5x5_90th ... RhoHV_5x5_90th Zdr Zdr_5x5_10th Zdr_5x5_50th Zdr_5x5_90th Kdp Kdp_5x5_10th Kdp_5x5_50th Kdp_5x5_90th Expected
count 95825.000000 95825.000000 44990.000000 38441.000000 45146.000000 53163.000000 48020.000000 42275.000000 48117.000000 55260.000000 ... 42210.000000 36040.000000 30901.000000 36025.000000 42210.000000 31318.000000 26464.000000 31368.000000 36627.000000 95825.000000
mean 29.686595 11.052523 23.684530 20.789496 23.378716 26.419606 25.422845 22.955328 25.136573 27.966965 ... 1.014748 0.597368 -0.565218 0.428472 2.017049 -0.011121 -3.384371 -0.431668 3.854969 17.433421
std 17.417786 4.239598 10.227057 9.074095 9.939687 11.193085 10.630984 9.639113 10.377139 11.544993 ... 0.045614 1.387422 0.973801 0.865122 1.539430 3.747653 2.772211 2.202447 3.761289 84.809357
min 0.000000 0.000000 -29.000000 -31.500000 -31.500000 -26.500000 -26.500000 -27.500000 -25.000000 -23.000000 ... 0.208333 -7.875000 -7.875000 -7.875000 -7.875000 -52.880005 -51.420000 -46.870010 -41.540010 0.010000
25% 15.000000 9.000000 17.500000 15.500000 17.500000 19.500000 19.000000 17.500000 18.500000 20.500000 ... 0.998333 -0.062500 -1.000000 0.062500 1.125000 -1.410004 -4.230011 -0.710007 1.759994 0.254000
50% 30.000000 12.000000 24.000000 21.000000 23.500000 27.000000 25.500000 23.000000 25.500000 28.500000 ... 1.005000 0.500000 -0.500000 0.375000 1.687500 0.000000 -2.809998 0.000000 3.169998 1.016000
75% 45.000000 14.000000 30.500000 27.000000 30.500000 34.500000 33.000000 29.500000 32.500000 36.500000 ... 1.051667 1.125000 0.000000 0.750000 2.500000 1.409988 -1.740006 0.349991 5.289993 3.048001
max 59.000000 21.000000 64.500000 57.000000 61.500000 67.500000 68.000000 59.500000 64.000000 79.500000 ... 1.051667 7.937500 5.937500 7.937500 7.937500 47.849990 1.759994 5.629990 43.209990 876.300500

8 rows × 23 columns

Quick analysis of the sparsity by column


In [7]:
l = float(len(raw["minutes_past"]))
comp = [[1-raw[i].isnull().sum()/l , i] for i in raw.columns]
comp.sort(key=lambda x: x[0], reverse=True)

ratios, names = zip(*comp)
sns.barplot(list(ratios), list(names), palette=sns.cubehelix_palette(len(comp), start=.5, rot=-.75))
plt.title("Percentage of non NaN data")
plt.show()


We see that, except for the always-present columns minutes_past, radardist_km and Expected, the dataset is mostly sparse.
Let's transform the dataset to conduct further analysis.

We group the data by Id
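
As an aside, this per-Id aggregation could also be expressed with a pandas groupby on the index instead of the explicit loop used in getXy below. A minimal sketch that only keeps per-column means:


In [ ]:
# Per-Id column means (NaN ignored) and the per-Id gauge target
grouped = raw.groupby(level='Id')
mean_features = grouped.mean().drop(['minutes_past', 'Expected'], axis=1)
target = grouped['Expected'].first()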


In [8]:
# We select all features except for minutes_past,
# because we ignore the temporal spacing of the sequence for now

features_columns = list([u'Ref', u'Ref_5x5_10th',
       u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
       u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
       u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
       u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
       u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
       u'Kdp_5x5_50th', u'Kdp_5x5_90th'])

def getXy(raw):
    selected_columns = list([ u'radardist_km', u'Ref', u'Ref_5x5_10th',
       u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
       u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
       u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
       u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
       u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
       u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
    
    data = raw[selected_columns]
    
    docX, docY = [], []
    for i in data.index.unique():
        if isinstance(data.loc[i], pd.core.series.Series):
            # single observation for this Id: wrap it so X stays a list of 2D arrays
            m = [data.loc[i].as_matrix()]
            docX.append(m)
            docY.append(float(raw.loc[i]["Expected"]))
        else:
            # several observations: one row per radar scan, Expected is constant per Id
            m = data.loc[i].as_matrix()
            docX.append(m)
            docY.append(float(raw.loc[i][:1]["Expected"]))
    X , y = np.array(docX) , np.array(docY)
    return X,y

How many observations are there for each Id?
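
This can also be read directly off the grouped index (a quick sketch):


In [ ]:
# How many Ids have each possible number of radar observations
obs_per_id = raw.groupby(level='Id').size()
print(obs_per_id.value_counts().sort_index())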


In [9]:
%%time
X,y=getXy(raw)

tmp = []
for i in X:
    tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On complete dataset)")
plt.plot()

print("Average gauge observation in mm: %0.2f"%y.mean())


Average gauge observation in mm: 17.55
CPU times: user 6.07 s, sys: 46.5 ms, total: 6.12 s
Wall time: 6.16 s

We see that many Ids have 6 or 12 observations, which means one observation every 10 or 5 minutes on average.
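
Equivalently, assuming a one-hour window per Id, the average time between two radar scans is (a quick sketch):


In [ ]:
# Average number of minutes between scans, per Id
scan_interval = 60.0 / raw.groupby(level='Id').size()
print(scan_interval.describe())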


In [10]:
pd.DataFrame(y).describe()


Out[10]:
0
count 9100.000000
mean 17.547618
std 82.682028
min 0.010000
25% 0.254000
50% 0.762000
75% 2.794001
max 876.300500

Now let's run the same analysis on different subsets:

On the fully filled dataset (rows without any NaN)


In [11]:
#noAnyNan = raw.loc[raw[features_columns].dropna(how='any').index.unique()]
noAnyNan = raw.dropna()

In [12]:
noAnyNan.isnull().sum()


Out[12]:
minutes_past             0
radardist_km             0
Ref                      0
Ref_5x5_10th             0
Ref_5x5_50th             0
Ref_5x5_90th             0
RefComposite             0
RefComposite_5x5_10th    0
RefComposite_5x5_50th    0
RefComposite_5x5_90th    0
RhoHV                    0
RhoHV_5x5_10th           0
RhoHV_5x5_50th           0
RhoHV_5x5_90th           0
Zdr                      0
Zdr_5x5_10th             0
Zdr_5x5_50th             0
Zdr_5x5_90th             0
Kdp                      0
Kdp_5x5_10th             0
Kdp_5x5_50th             0
Kdp_5x5_90th             0
Expected                 0
dtype: int64

In [13]:
X,y=getXy(noAnyNan)

tmp = []
for i in X:
    tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On fully filled dataset)")
plt.plot()

print("Average gauge observation in mm: %0.2f"%y.mean())


Average gauge observation in mm: 4.80

In [14]:
pd.DataFrame(y).describe()


Out[14]:
0
count 3093.000000
mean 4.804508
std 23.151911
min 0.010000
25% 0.508000
50% 1.778001
75% 3.810002
max 675.999400

On the partly filled dataset (Ids with at least some radar data)


In [15]:
noFullNan = raw.loc[raw[features_columns].dropna(how='all').index.unique()]

In [16]:
noFullNan.isnull().sum()


Out[16]:
minutes_past                 0
radardist_km                 0
Ref                      25775
Ref_5x5_10th             32324
Ref_5x5_50th             25619
Ref_5x5_90th             17602
RefComposite             22745
RefComposite_5x5_10th    28490
RefComposite_5x5_50th    22648
RefComposite_5x5_90th    15505
RhoHV                    34725
RhoHV_5x5_10th           39864
RhoHV_5x5_50th           34740
RhoHV_5x5_90th           28555
Zdr                      34725
Zdr_5x5_10th             39864
Zdr_5x5_50th             34740
Zdr_5x5_90th             28555
Kdp                      39447
Kdp_5x5_10th             44301
Kdp_5x5_50th             39397
Kdp_5x5_90th             34138
Expected                     0
dtype: int64

In [17]:
X,y=getXy(noFullNan)

tmp = []
for i in X:
    tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On partly filled dataset)")
plt.plot()

print("Average gauge observation in mm: %0.2f"%y.mean())


Average gauge observation in mm: 9.65

In [18]:
pd.DataFrame(y).describe()


Out[18]:
0
count 6058.000000
mean 9.650996
std 57.325040
min 0.010000
25% 0.254000
50% 1.270001
75% 3.302002
max 876.300500

On the fully empty dataset (Ids with no radar data at all)


In [19]:
fullNan = raw.drop(raw[features_columns].dropna(how='all').index)

In [20]:
fullNan.isnull().sum()


Out[20]:
minutes_past                 0
radardist_km                 0
Ref                      25060
Ref_5x5_10th             25060
Ref_5x5_50th             25060
Ref_5x5_90th             25060
RefComposite             25060
RefComposite_5x5_10th    25060
RefComposite_5x5_50th    25060
RefComposite_5x5_90th    25060
RhoHV                    25060
RhoHV_5x5_10th           25060
RhoHV_5x5_50th           25060
RhoHV_5x5_90th           25060
Zdr                      25060
Zdr_5x5_10th             25060
Zdr_5x5_50th             25060
Zdr_5x5_90th             25060
Kdp                      25060
Kdp_5x5_10th             25060
Kdp_5x5_50th             25060
Kdp_5x5_90th             25060
Expected                     0
dtype: int64

In [21]:
X,y=getXy(fullNan)

tmp = []
for i in X:
    tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On fully empty dataset)")
plt.plot()

print("Average gauge observation in mm: %0.2f"%y.mean())


Average gauge observation in mm: 33.27

In [22]:
pd.DataFrame(y).describe()


Out[22]:
0
count 3042.000000
mean 33.273368
std 116.353313
min 0.010000
25% 0.254000
50% 0.445000
75% 1.016000
max 774.700440

Strangely, we notice that the fewer observations there are, the more it rains on average.
However, most of the gauge values in the fully empty subset still fall below 0.5 mm (the median is 0.445).
What prediction should we make when there is no data?
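
Since the competition is scored with the mean absolute error, the best constant prediction for these no-data Ids is the median of their gauge values rather than the mean. A quick check on the y returned by getXy(fullNan):


In [ ]:
# For a constant prediction, the median minimizes the mean absolute error
for name, c in [("mean", y.mean()), ("median", np.median(y)), ("zero", 0.0)]:
    print("%-6s %8.3f -> MAE %8.3f" % (name, c, np.abs(y - c).mean()))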


In [23]:
print("%d observations" %(len(raw)))
#print("%d fully filled, %d partly filled, %d fully empty"
#      %(len(noAnyNan),len(noFullNan),len(raw)-len(noFullNan)))
print("%0.1f%% fully filled, %0.1f%% partly filled, %0.1f%% fully empty"
      %(len(noAnyNan)/float(len(raw))*100,
        len(noFullNan)/float(len(raw))*100,
        (len(raw)-len(noFullNan))/float(len(raw))*100))


95825 observations
23.1% fully filled, 73.8% at least partly filled, 26.2% fully empty

In [32]:
raw[raw["Expected"] < 1]["Expected"].hist(bins=10)


Out[32]:
<matplotlib.axes.AxesSubplot at 0x10e53c750>


Predictions

As a first try, we make model predictions on the fully filled Ids, and return the 50th-percentile constants for the partly filled and fully empty Ids
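
The fallback constants are the training medians computed above (0.445 mm for fully empty Ids, 1.27 mm for partly filled ones). A hypothetical helper summarizing this rule could look as follows (a sketch only, not the code used below; it assumes a regressor trained on per-column means):


In [ ]:
def predict_for_id(obs, model):
    """obs: DataFrame with the radar feature columns of a single Id (sketch only)."""
    if len(obs.dropna()) == len(obs):
        # fully filled: average the observations and query the regressor
        return float(model.predict([obs.mean(0).values])[0])
    elif len(obs.dropna(how='all')) == 0:
        # fully empty: median Expected of the fully empty training subset
        return 0.445
    else:
        # partly filled: median Expected of the partly filled training subset
        return 1.27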


In [85]:
etreg = ExtraTreesRegressor(n_estimators=100, max_depth=None, min_samples_split=1, random_state=0)

In [89]:
%%time
X,y=getXy(noAnyNan)


CPU times: user 1.81 s, sys: 14 ms, total: 1.83 s
Wall time: 1.83 s

In [108]:
%%time
#XX = [np.array(t).mean(0) for t in X]
# Per-Id features: per-column means (NaN ignored) concatenated with the summed
# consecutive differences, which telescopes to (last scan - first scan) per column
XX = [np.append(np.nanmean(np.array(t), 0), (np.array(t)[1:] - np.array(t)[:-1]).sum(0)) for t in X]


CPU times: user 211 ms, sys: 2.5 ms, total: 214 ms
Wall time: 214 ms

In [99]:
XX[0]


Out[99]:
array([  2.        ,  14.38888889,  12.16666667,  15.55555556,
        19.22222222,  20.83333333,  18.66666667,  21.38888889,
        24.44444444,   0.99796296,   0.99462964,   0.99833333,
         0.99907408,   0.42361111,   0.16666667,   0.45138889,
         0.79861111,  -0.2344496 ,  -1.44889326,  -0.31222703,   1.05888367])

In [93]:
%%time
etreg = etreg.fit(XX,y)


CPU times: user 1.1 s, sys: 19 ms, total: 1.12 s
Wall time: 1.12 s

In [112]:
split = 0.2
ps = int(len(XX) * (1-split))
X_train = XX[:ps]
y_train = y[:ps]
X_test = XX[ps:]
y_test = y[ps:]

In [116]:
from sklearn import svm

In [117]:
svr = svm.SVR()

In [118]:
svr.fit(X_train,y_train)


Out[118]:
SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.0,
  kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)

In [119]:
%%time
svr_score = cross_val_score(svr, XX, y, cv=5)
print("Score: %s\tMean: %.03f"%(svr_score,svr_score.mean()))


Score: [ 0.00205307 -0.00414138 -0.00099299 -0.00541524 -0.00852066]	Mean: -0.003
CPU times: user 3.68 s, sys: 21.8 ms, total: 3.7 s
Wall time: 3.71 s

In [120]:
err = (svr.predict(X_test)-y_test)**2
err.sum()/len(err)


Out[120]:
214.8214198287813

In [113]:
%%time
etreg.fit(X_train,y_train)


CPU times: user 1.74 s, sys: 19.9 ms, total: 1.76 s
Wall time: 1.77 s
Out[113]:
ExtraTreesRegressor(bootstrap=False, criterion='mse', max_depth=None,
          max_features='auto', max_leaf_nodes=None, min_samples_leaf=1,
          min_samples_split=1, min_weight_fraction_leaf=0.0,
          n_estimators=100, n_jobs=1, oob_score=False, random_state=0,
          verbose=0, warm_start=False)

In [96]:
%%time
et_score = cross_val_score(etreg, XX, y, cv=5)
print("Score: %s\tMean: %.03f"%(et_score,et_score.mean()))


Score: [ 0.51407557  0.15938088  0.69138431  0.09297001  0.00505057]	Mean: 0.293
CPU times: user 4.46 s, sys: 70.5 ms, total: 4.54 s
Wall time: 4.55 s

In [114]:
%%time
et_score = cross_val_score(etreg, XX, y, cv=5)
print("Score: %s\tMean: %.03f"%(et_score,et_score.mean()))


Score: [ 0.53941779  0.1639901   0.68162502  0.12919835  0.00245396]	Mean: 0.303
CPU times: user 8.64 s, sys: 93.9 ms, total: 8.73 s
Wall time: 8.76 s

In [97]:
err = (etreg.predict(X_test)-y_test)**2
err.sum()/len(err)


Out[97]:
216.45597252585003

In [115]:
err = (etreg.predict(X_test)-y_test)**2
err.sum()/len(err)


Out[115]:
207.96284975354561

In [41]:
r = random.randrange(len(X_train))
print(r)
print(etreg.predict(X_train[r]))
print(y_train[r])


1767
[ 4.8260026]
4.8260026

In [61]:
r = random.randrange(len(X_test))
print(r)
print(etreg.predict(X_test[r]))
print(y_test[r])


173
[ 5.98220317]
10.668006


In [13]:
%%time
#filename = "data/reduced_test_5000.csv"
filename = "data/test.csv"
test = pd.read_csv(filename)
test = test.set_index('Id')


CPU times: user 20.9 s, sys: 5.06 s, total: 25.9 s
Wall time: 26.8 s

In [14]:
features_columns = list([u'Ref', u'Ref_5x5_10th',
       u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
       u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
       u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
       u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
       u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
       u'Kdp_5x5_50th', u'Kdp_5x5_90th'])

# Same as getXy above, but without the target (the test set has no Expected column)
def getX(raw):
    selected_columns = list([ u'radardist_km', u'Ref', u'Ref_5x5_10th',
       u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
       u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
       u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
       u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
       u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
       u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
    
    data = raw[selected_columns]
    
    docX= []
    for i in data.index.unique():
        if isinstance(data.loc[i],pd.core.series.Series):
            m = [data.loc[i].as_matrix()]
            docX.append(m)
        else:
            m = data.loc[i].as_matrix()
            docX.append(m)
    X = np.array(docX)
    return X

In [74]:
X=getX(test)

tmp = []
for i in X:
    tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On test dataset)")
plt.plot()


Out[74]:
[]

In [15]:
testFull = test.dropna()

In [16]:
%%time
X=getX(testFull)  # 1min
XX = [np.array(t).mean(0) for t in X]  # 10s


CPU times: user 1min 12s, sys: 860 ms, total: 1min 13s
Wall time: 1min 14s

In [17]:
pd.DataFrame(etreg.predict(XX)).describe()


Out[17]:
0
count 235515.000000
mean 6.904250
std 7.382986
min 0.229492
25% 2.670935
50% 4.726779
75% 8.646297
max 344.055538

In [18]:
predFull = zip(testFull.index.unique(),etreg.predict(XX))

In [36]:
testNan = test.drop(test[features_columns].dropna(how='all').index)

In [37]:
tmp = np.empty(len(testNan))
tmp.fill(0.445000)   # 50th percentile of the fully empty (all-NaN) training subset
predNan = zip(testNan.index.unique(),tmp)

In [39]:
testLeft = test.drop(testNan.index.unique()).drop(testFull.index.unique())

In [40]:
tmp = np.empty(len(testLeft))
tmp.fill(1.27)   # 50th percentile of the partly filled training subset
predLeft = zip(testLeft.index.unique(),tmp)

In [41]:
len(testFull.index.unique())


Out[41]:
235515

In [42]:
len(testNan.index.unique())


Out[42]:
232148

In [43]:
len(testLeft.index.unique())


Out[43]:
249962

In [44]:
pred = predFull + predNan + predLeft

In [45]:
pred.sort(key=lambda x: x[0], reverse=False)

In [46]:
submission = pd.DataFrame(pred)
submission.columns = ["Id","Expected"]
submission.head()


Out[46]:
Id Expected
0 1 1.270000
1 2 1.270000
2 3 8.287113
3 4 9.608633
4 5 0.445000

In [47]:
submission.to_csv("first_submit.csv",index=False)


In [34]:
filename = "data/sample_solution.csv"
sol = pd.read_csv(filename)

In [28]:
ss = np.array(sol)

In [30]:
%%time
for a,b in predFull:
    ss[a-1][1]=b


CPU times: user 361 ms, sys: 28.7 ms, total: 390 ms
Wall time: 369 ms

In [41]:
ss


Out[41]:
array([[  1.00000000e+00,   8.57645300e-02],
       [  2.00000000e+00,   1.20000000e+01],
       [  3.00000000e+00,   8.28711271e+00],
       ..., 
       [  7.17623000e+05,   9.65720687e+00],
       [  7.17624000e+05,   1.29702251e+00],
       [  7.17625000e+05,   0.00000000e+00]])

In [45]:
sub = pd.DataFrame(ss)
sub.columns = ["Id","Expected"]
sub.Id = sub.Id.astype(int)
sub.head()


Out[45]:
Id Expected
0 1 0.085765
1 2 12.000000
2 3 8.287113
3 4 9.608633
4 5 0.000000

In [46]:
sub.to_csv("submit_2.csv",index=False)
