FDMS TME3

Kaggle How Much Did It Rain? II

Florian Toque & Paul Willot

Notes

We tried different regressor model, like GBR, SVM, MLP, Random Forest and KNN as recommanded by the winning team of the Kaggle on taxi trajectories. So far GBR seems to be the best, slightly better than the RF.
The new features we exctracted only made a small impact on predictions but still improved them consistently.
We tried to use a LSTM to take advantage of the sequential structure of the data but it didn't work too well, probably because there is not enought data (13M lines divided per the average length of sequences (15), less the 30% of fully empty data)


In [4]:
# from __future__ import exam_success
from __future__ import absolute_import
from __future__ import print_function

# Standard imports
%matplotlib inline
import os
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
import scipy.stats as stats

# Sk cheats
from sklearn.cross_validation import cross_val_score
from sklearn import grid_search
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
#from sklearn.preprocessing import Imputer   # get rid of nan
from sklearn.decomposition import NMF   # to add features based on the latent representation
from sklearn.decomposition import ProjectedGradientNMF

# Faster gradient boosting
import xgboost as xgb

# For neural networks models
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, RMSprop


Using Theano backend.
  • 13.765.202 lines in train.csv
  • 8.022.757 lines in test.csv

Few words about the dataset

Predictions is made in the USA corn growing states (mainly Iowa, Illinois, Indiana) during the season with the highest rainfall (as illustrated by Iowa for the april to august months)

The Kaggle page indicate that the dataset have been shuffled, so working on a subset seems acceptable
The test set is not a extracted from the same data as the training set however, which make the evaluation trickier

Load the dataset


In [5]:
%%time
#filename = "data/train.csv"
filename = "data/reduced_train_100000.csv"
#filename = "data/reduced_train_1000000.csv"
raw = pd.read_csv(filename)
raw = raw.set_index('Id')


CPU times: user 372 ms, sys: 37.8 ms, total: 410 ms
Wall time: 410 ms

In [6]:
raw.columns


Out[6]:
Index([u'minutes_past', u'radardist_km', u'Ref', u'Ref_5x5_10th',
       u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
       u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
       u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
       u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
       u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
       u'Kdp_5x5_50th', u'Kdp_5x5_90th', u'Expected'],
      dtype='object')

In [7]:
raw['Expected'].describe()


Out[7]:
count    100000.000000
mean        129.579825
std         687.622542
min           0.010000
25%           0.254000
50%           1.016000
75%           3.556002
max       32740.617000
Name: Expected, dtype: float64

Per wikipedia, a value of more than 421 mm/h is considered "Extreme/large hail"
If we encounter the value 327.40 meter per hour, we should probably start building Noah's ark
Therefor, it seems reasonable to drop values too large, considered as outliers


In [8]:
# Considering that the gauge may concentrate the rainfall, we set the cap to 1000
# Comment this line to analyse the complete dataset 
l = len(raw)
raw = raw[raw['Expected'] < 300]  #1000
print("Dropped %d (%0.2f%%)"%(l-len(raw),(l-len(raw))/float(l)*100))


Dropped 6241 (6.24%)

In [9]:
raw.head(5)


Out[9]:
minutes_past radardist_km Ref Ref_5x5_10th Ref_5x5_50th Ref_5x5_90th RefComposite RefComposite_5x5_10th RefComposite_5x5_50th RefComposite_5x5_90th ... RhoHV_5x5_90th Zdr Zdr_5x5_10th Zdr_5x5_50th Zdr_5x5_90th Kdp Kdp_5x5_10th Kdp_5x5_50th Kdp_5x5_90th Expected
Id
1 3 10 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.254
1 16 10 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.254
1 25 10 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.254
1 35 10 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.254
1 45 10 NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.254

5 rows × 23 columns


In [10]:
raw.describe()


Out[10]:
minutes_past radardist_km Ref Ref_5x5_10th Ref_5x5_50th Ref_5x5_90th RefComposite RefComposite_5x5_10th RefComposite_5x5_50th RefComposite_5x5_90th ... RhoHV_5x5_90th Zdr Zdr_5x5_10th Zdr_5x5_50th Zdr_5x5_90th Kdp Kdp_5x5_10th Kdp_5x5_50th Kdp_5x5_90th Expected
count 93759.000000 93759.000000 44923.000000 38391.000000 45078.000000 53038.000000 47946.000000 42219.000000 48042.000000 55118.000000 ... 42064.000000 35938.000000 30835.000000 35925.000000 42064.000000 31231.000000 26418.000000 31283.000000 36505.000000 93759.000000
mean 29.684830 11.022334 23.684482 20.788948 23.378688 26.427731 25.424874 22.956797 25.139201 27.982365 ... 1.014742 0.597837 -0.564851 0.429577 2.018197 -0.013098 -3.383198 -0.429909 3.855601 5.837679
std 17.418876 4.259865 10.224306 9.073503 9.936862 11.186952 10.627954 9.638466 10.372406 11.535892 ... 0.045336 1.388384 0.974288 0.864887 1.539513 3.747791 2.771442 2.194894 3.762005 22.764656
min 0.000000 0.000000 -29.000000 -31.500000 -31.500000 -26.500000 -26.500000 -27.500000 -25.000000 -23.000000 ... 0.208333 -7.875000 -7.875000 -7.875000 -7.875000 -52.880005 -51.420000 -46.870010 -41.540010 0.010000
25% 15.000000 9.000000 17.500000 15.500000 17.500000 19.500000 19.000000 17.500000 18.500000 20.500000 ... 0.998333 -0.062500 -1.000000 0.062500 1.125000 -1.410004 -4.230011 -0.710007 1.759994 0.254000
50% 30.000000 12.000000 24.000000 21.000000 23.500000 27.000000 25.500000 23.000000 25.500000 28.500000 ... 1.005000 0.500000 -0.500000 0.375000 1.687500 0.000000 -2.809998 0.000000 3.169998 0.830000
75% 45.000000 14.000000 30.500000 27.000000 30.500000 34.500000 33.000000 29.500000 32.500000 36.500000 ... 1.051667 1.125000 0.000000 0.750000 2.500000 1.409988 -1.740006 0.349991 5.289993 2.794001
max 59.000000 21.000000 64.500000 57.000000 61.500000 67.500000 68.000000 59.500000 64.000000 79.500000 ... 1.051667 7.937500 5.937500 7.937500 7.937500 47.849990 1.759994 5.629990 43.209990 244.000120

8 rows × 23 columns

We regroup the data by ID


In [11]:
# We select all features except for the minutes past,
# because we ignore the time repartition of the sequence for now

features_columns = list([u'Ref', u'Ref_5x5_10th',
       u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
       u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
       u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
       u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
       u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
       u'Kdp_5x5_50th', u'Kdp_5x5_90th'])

def getXy(raw):
    selected_columns = list([ u'minutes_past',u'radardist_km', u'Ref', u'Ref_5x5_10th',
       u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
       u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
       u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
       u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
       u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
       u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
    
    data = raw[selected_columns]
    
    docX, docY = [], []
    for i in data.index.unique():
        if isinstance(data.loc[i],pd.core.series.Series):
            m = [data.loc[i].as_matrix()]
            docX.append(m)
            docY.append(float(raw.loc[i]["Expected"]))
        else:
            m = data.loc[i].as_matrix()
            docX.append(m)
            docY.append(float(raw.loc[i][:1]["Expected"]))
    X , y = np.array(docX) , np.array(docY)
    return X,y

On fully filled dataset


In [12]:
#noAnyNan = raw.loc[raw[features_columns].dropna(how='any').index.unique()]
noAnyNan = raw.dropna()

In [13]:
noFullNan = raw.loc[raw[features_columns].dropna(how='all').index.unique()]

In [14]:
fullNan = raw.drop(raw[features_columns].dropna(how='all').index)

In [15]:
print(len(raw))
print(len(noAnyNan))
print(len(noFullNan))
print(len(fullNan))


93759
22158
70149
23610

Predicitons

As a first try, we make predictions on the complete data, and return the 50th percentile and uncomplete and fully empty data


In [16]:
%%time
#X,y=getXy(noAnyNan)
X,y=getXy(noFullNan)


CPU times: user 5.48 s, sys: 56.8 ms, total: 5.54 s
Wall time: 5.55 s

In [20]:
#%%time
#XX = [np.array(t).mean(0) for t in X]
#XX = np.array([np.append(np.nanmean(np.array(t),0),(np.array(t)[1:] - np.array(t)[:-1]).sum(0) ) for t in X])

In [18]:
XX=[]
for t in X:
    nm = np.nanmean(t,0)
    for idx,j in enumerate(nm):
        if np.isnan(j):
            nm[idx]=global_means[idx]
    XX.append(nm)
XX=np.array(XX)

# rescale to clip min at 0 (for non negative matrix factorization)
XX_rescaled=XX[:,:]-np.min(XX,0)

In [19]:
%%time
nn = ProjectedGradientNMF()
W = nn.fit_transform(XX_rescaled)
#H = nn.components_


/Library/Python/2.7/site-packages/sklearn/utils/__init__.py:75: DeprecationWarning: Class ProjectedGradientNMF is deprecated; It will be removed in release 0.19. Use NMF instead.'pg' solver is still available until release 0.19.
  warnings.warn(msg, category=DeprecationWarning)
/Library/Python/2.7/site-packages/sklearn/decomposition/nmf.py:775: DeprecationWarning: 'pg' solver will be removed in release 0.19. Use 'cd' solver instead.
  " Use 'cd' solver instead.", DeprecationWarning)
/Library/Python/2.7/site-packages/sklearn/decomposition/nmf.py:341: UserWarning: Iteration limit reached in nls subproblem.
  warnings.warn("Iteration limit reached in nls subproblem.")
CPU times: user 5.07 s, sys: 231 ms, total: 5.3 s
Wall time: 4.28 s

In [17]:
global_means = np.nanmean(noFullNan,0)

XX=[]
for t in X:
    nm = np.nanmean(t,0)
    for idx,j in enumerate(nm):
        if np.isnan(j):
            nm[idx]=global_means[idx]
    XX.append(nm)
XX=np.array(XX)

# rescale to clip min at 0 (for non negative matrix factorization)
XX_rescaled=XX[:,:]-np.min(XX,0)

nmf = NMF(max_iter=1000)
W = nmf.fit_transform(XX_rescaled)
#H = nn.components_


/Library/Python/2.7/site-packages/numpy/lib/nanfunctions.py:675: RuntimeWarning: Mean of empty slice
  warnings.warn("Mean of empty slice", RuntimeWarning)

In [21]:
# used to fill fully empty datas
global_means = np.nanmean(noFullNan,0)

# reduce the sequence structure of the data and produce
# new hopefully informatives features
def addFeatures(X,mf=0):
    # used to fill fully empty datas
    #global_means = np.nanmean(X,0)
    
    XX=[]
    nbFeatures=float(len(X[0][0]))
    for idxt,t in enumerate(X):
        
        # compute means, ignoring nan when possible, marking it when fully filled with nan
        nm = np.nanmean(t,0)
        tt=[]
        for idx,j in enumerate(nm):
            if np.isnan(j):
                nm[idx]=global_means[idx]
                tt.append(1)
            else:
                tt.append(0)
        tmp = np.append(nm,np.append(tt,tt.count(0)/nbFeatures))
        
        # faster if working on fully filled data:
        #tmp = np.append(np.nanmean(np.array(t),0),(np.array(t)[1:] - np.array(t)[:-1]).sum(0) )
        
        # add the percentiles
        tmp = np.append(tmp,np.nanpercentile(t,10,axis=0))
        tmp = np.append(tmp,np.nanpercentile(t,50,axis=0))
        tmp = np.append(tmp,np.nanpercentile(t,90,axis=0))
        
        for idx,i in enumerate(tmp):
            if np.isnan(i):
                tmp[idx]=0

        # adding the dbz as a feature
        test = t
        try:
            taa=test[:,0]
        except TypeError:
            taa=[test[0][0]]
        valid_time = np.zeros_like(taa)
        valid_time[0] = taa[0]
        for n in xrange(1,len(taa)):
            valid_time[n] = taa[n] - taa[n-1]
        valid_time[-1] = valid_time[-1] + 60 - np.sum(valid_time)
        valid_time = valid_time / 60.0


        sum=0
        try:
            column_ref=test[:,2]
        except TypeError:
            column_ref=[test[0][2]]
        for dbz, hours in zip(column_ref, valid_time):
            # See: https://en.wikipedia.org/wiki/DBZ_(meteorology)
            if np.isfinite(dbz):
                mmperhr = pow(pow(10, dbz/10)/200, 0.625)
                sum = sum + mmperhr * hours
        
        if not(mf is 0):
            tmp = np.append(tmp,mf[idxt])

        XX.append(np.append(np.array(sum),tmp))
        #XX.append(np.array([sum]))
        #XX.append(tmp)
    return XX

In [22]:
%time
XX=addFeatures(X,mf=W)
#XX=addFeatures(X)


CPU times: user 2 µs, sys: 4 µs, total: 6 µs
Wall time: 10 µs
/Library/Python/2.7/site-packages/numpy/lib/nanfunctions.py:993: RuntimeWarning: All-NaN slice encountered
  warnings.warn("All-NaN slice encountered", RuntimeWarning)

In [ ]:

(X[0][1:] - X[0][:-1]).sum(0) (t[1:] - t[:-1]).sum(0) %%time etreg = etreg.fit(XX,y)

In [23]:
def splitTrainTest(X, y, split=0.2):
    tmp1, tmp2 = [], []
    ps = int(len(X) * (1-split))
    index_shuf = range(len(X))
    random.shuffle(index_shuf)
    for i in index_shuf:
        tmp1.append(X[i])
        tmp2.append(y[i])
    return tmp1[:ps], tmp2[:ps], tmp1[ps:], tmp2[ps:]

In [24]:
X_train,y_train, X_test, y_test = splitTrainTest(XX,y)


In [27]:
def manualScorer(estimator, X, y):
    err = (estimator.predict(X_test)-y_test)**2
    return -err.sum()/len(err)

max prof 24 nb trees 84 min sample per leaf 17 min sample to split 51


In [36]:
svr = SVR(kernel='rbf', C=800.0)
parameters = {'C':range(500,900,50)} grid_svr = grid_search.GridSearchCV(svr, parameters,scoring=manualScorer) grid_svr.fit(X_train,y_train) print(grid_svr.grid_scores_) print("Best: ",grid_svr.best_params_)

In [37]:
%%time
srv = svr.fit(X_train,y_train)


CPU times: user 11.7 s, sys: 83.4 ms, total: 11.8 s
Wall time: 11.8 s

In [38]:
print(svr.score(X_train,y_train))
print(svr.score(X_test,y_test))


0.918792388154
0.611602236156

In [39]:
err = (svr.predict(X_train)-y_train)**2
err.sum()/len(err)


Out[39]:
24.378843410267176

In [40]:
err = (svr.predict(X_test)-y_test)**2
err.sum()/len(err)


Out[40]:
157.32833851945441

In [41]:
%%time
svr_score = cross_val_score(svr, XX, y, cv=5)
print("Score: %s\nMean: %.03f"%(svr_score,svr_score.mean()))


Score: [ 0.4658191   0.85429307  0.80191122  0.70893065  0.4809078 ]
Mean: 0.662
CPU times: user 58.7 s, sys: 233 ms, total: 58.9 s
Wall time: 59 s


In [42]:
knn = KNeighborsRegressor(n_neighbors=6,weights='distance',algorithm='ball_tree')

In [43]:
#parameters = {'weights':('distance','uniform'),'algorithm':('auto', 'ball_tree', 'kd_tree', 'brute')}
parameters = {'n_neighbors':range(1,10,1)}
grid_knn = grid_search.GridSearchCV(knn, parameters,scoring=manualScorer)

In [44]:
%%time
grid_knn.fit(X_train,y_train)


CPU times: user 11.5 s, sys: 22.8 ms, total: 11.6 s
Wall time: 11.6 s
Out[44]:
GridSearchCV(cv=None, error_score='raise',
       estimator=KNeighborsRegressor(algorithm='ball_tree', leaf_size=30, metric='minkowski',
          metric_params=None, n_jobs=1, n_neighbors=6, p=2,
          weights='distance'),
       fit_params={}, iid=True, n_jobs=1,
       param_grid={'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9]},
       pre_dispatch='2*n_jobs', refit=True,
       scoring=<function manualScorer at 0x111def7d0>, verbose=0)

In [45]:
print(grid_knn.grid_scores_)
print("Best: ",grid_knn.best_params_)


[mean: -352.77795, std: 30.22895, params: {'n_neighbors': 1}, mean: -309.10945, std: 13.86237, params: {'n_neighbors': 2}, mean: -292.29459, std: 13.88150, params: {'n_neighbors': 3}, mean: -282.73488, std: 14.19093, params: {'n_neighbors': 4}, mean: -265.87640, std: 5.24433, params: {'n_neighbors': 5}, mean: -257.66251, std: 11.31830, params: {'n_neighbors': 6}, mean: -254.44185, std: 14.42411, params: {'n_neighbors': 7}, mean: -253.52680, std: 16.44452, params: {'n_neighbors': 8}, mean: -256.11824, std: 16.81184, params: {'n_neighbors': 9}]
Best:  {'n_neighbors': 8}

In [46]:
knn = grid_knn.best_estimator_

In [47]:
knn= knn.fit(X_train,y_train)

In [48]:
print(knn.score(X_train,y_train))
print(knn.score(X_test,y_test))


1.0
0.502520117937

In [49]:
err = (knn.predict(X_train)-y_train)**2
err.sum()/len(err)


Out[49]:
0.0

In [50]:
err = (knn.predict(X_test)-y_test)**2
err.sum()/len(err)


Out[50]:
201.51424796371862


In [59]:
etreg = ExtraTreesRegressor(n_estimators=200, max_depth=None, min_samples_split=1, random_state=0,n_jobs=4)

In [60]:
parameters = {'n_estimators':range(100,200,10)}
grid_rf = grid_search.GridSearchCV(etreg, parameters,n_jobs=4,scoring=manualScorer)

In [61]:
%%time
grid_rf.fit(X_train,y_train)


KeyboardInterrupt

In [ ]:
print(grid_rf.grid_scores_)
print("Best: ",grid_rf.best_params_)

In [ ]:
grid_rf.best_params_

In [ ]:
#etreg = grid_rf.best_estimator_

In [ ]:
%%time
etreg = etreg.fit(X_train,y_train)

In [ ]:
print(etreg.score(X_train,y_train))
print(etreg.score(X_test,y_test))

In [ ]:
err = (etreg.predict(X_train)-y_train)**2
err.sum()/len(err)

In [ ]:
err = (etreg.predict(X_test)-y_test)**2
err.sum()/len(err)


In [61]:
rfr = RandomForestRegressor(n_estimators=200, criterion='mse', max_depth=None, min_samples_split=2,
                            min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto',
                            max_leaf_nodes=None, bootstrap=True, oob_score=False, n_jobs=-1,
                            random_state=None, verbose=0, warm_start=False)

In [62]:
%%time
rfr = rfr.fit(X_train,y_train)


CPU times: user 2min 42s, sys: 319 ms, total: 2min 42s
Wall time: 21.7 s

In [63]:
print(rfr.score(X_train,y_train))
print(rfr.score(X_test,y_test))


0.934753945775
0.473645901607


In [64]:
# the dbz feature does not influence xgbr so much
xgbr = xgb.XGBRegressor(max_depth=6, learning_rate=0.1, n_estimators=700, silent=True,
                        objective='reg:linear', nthread=-1, gamma=0, min_child_weight=1,
                        max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1,
                        reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5,
                        seed=0, missing=None)

In [65]:
%%time
xgbr = xgbr.fit(X_train,y_train)


CPU times: user 52.3 s, sys: 44.4 ms, total: 52.3 s
Wall time: 52.3 s

In [66]:
# without the nmf features
# print(xgbr.score(X_train,y_train))
## 0.993948231144
# print(xgbr.score(X_test,y_test))
## 0.613931733332

In [67]:
# with nmf features
print(xgbr.score(X_train,y_train))
print(xgbr.score(X_test,y_test))


0.999788312407
0.541273583141


In [68]:
gbr = GradientBoostingRegressor(loss='ls', learning_rate=0.1, n_estimators=900,
                                subsample=1.0, min_samples_split=2, min_samples_leaf=1, max_depth=4, init=None,
                                random_state=None, max_features=None, alpha=0.5,
                                verbose=0, max_leaf_nodes=None, warm_start=False)

In [69]:
%%time
gbr = gbr.fit(X_train,y_train)
#os.system('say "終わりだ"') #its over!


CPU times: user 1min 13s, sys: 54.9 ms, total: 1min 13s
Wall time: 1min 13s

In [70]:
#parameters = {'max_depth':range(2,5,1),'alpha':[0.5,0.6,0.7,0.8,0.9]}
#parameters = {'subsample':[0.2,0.4,0.5,0.6,0.8,1]}
#parameters = {'subsample':[0.2,0.5,0.6,0.8,1],'n_estimators':[800,1000,1200]}
#parameters = {'max_depth':range(2,4,1)}
parameters = {'n_estimators':[400,800,1100]}
#parameters = {'loss':['ls', 'lad', 'huber', 'quantile'],'alpha':[0.3,0.5,0.8,0.9]}
#parameters = {'learning_rate':[0.1,0.5,0.9]}



grid_gbr = grid_search.GridSearchCV(gbr, parameters,n_jobs=2,scoring=manualScorer)

In [71]:
%%time
grid_gbr = grid_gbr.fit(X_train,y_train)


CPU times: user 1min 32s, sys: 609 ms, total: 1min 33s
Wall time: 4min 59s

In [72]:
print(grid_gbr.grid_scores_)
print("Best: ",grid_gbr.best_params_)


[mean: -259.71423, std: 24.14256, params: {'n_estimators': 400}, mean: -259.73451, std: 25.01831, params: {'n_estimators': 800}, mean: -259.10446, std: 25.17458, params: {'n_estimators': 1100}]
Best:  {'n_estimators': 1100}

In [73]:
print(gbr.score(X_train,y_train))
print(gbr.score(X_test,y_test))


0.997253431566
0.497976505158

In [74]:
err = (gbr.predict(X_train)-y_train)**2
print(err.sum()/len(err))
err = (gbr.predict(X_test)-y_test)**2
print(err.sum()/len(err))


0.82453061036
203.354729851

In [75]:
err = (gbr.predict(X_train)-y_train)**2
print(err.sum()/len(err))
err = (gbr.predict(X_test)-y_test)**2
print(err.sum()/len(err))


0.82453061036
203.354729851


In [76]:
t = []
for i in XX:
    t.append(np.count_nonzero(~np.isnan(i)) / float(i.size))
pd.DataFrame(np.array(t)).describe()


Out[76]:
0
count 6007
mean 1
std 0
min 1
25% 1
50% 1
75% 1
max 1


In [77]:
svr.predict(X_test)


Out[77]:
array([ 20.67972551,   2.54835977,  83.53369792, ...,   1.64918347,
         5.23542602,   0.09708299])

In [78]:
s = modelList[0]


---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-78-60f6deb64894> in <module>()
----> 1 s = modelList[0]

NameError: name 'modelList' is not defined

In [ ]:
t.mean(1)

In [ ]:
modelList = [svr,knn,etreg,rfr,xgbr,gbr]

In [ ]:
score_train = [[str(f).split("(")[0],f.score(X_train,y_train)] for f in modelList]
score_test = [[str(f).split("(")[0],f.score(X_test,y_test)] for f in modelList]
for idx,i in enumerate(score_train):
    print(i[0])
    print("   train: %.03f"%i[1])
    print("   test:  %.03f"%score_test[idx][1])

In [ ]:
globalPred = np.array([f.predict(XX) for f in modelList]).T

In [ ]:
globalPred[0]

In [ ]:
y[0]

In [ ]:


In [ ]:
err = (globalPred.mean(1)-y)**2
print(err.sum()/len(err))

In [ ]:
for f in modelList:
    print(str(f).split("(")[0])
    err = (f.predict(XX)-y)**2
    print(err.sum()/len(err))

In [ ]:
for f in modelList:
    print(str(f).split("(")[0])
    print(f.score(XX,y))

In [ ]:
svrMeta = SVR()

In [239]:
%%time
svrMeta = svrMeta.fit(globalPred,y)


CPU times: user 766 ms, sys: 12.4 ms, total: 778 ms
Wall time: 781 ms

In [241]:
err = (svrMeta.predict(globalPred)-y)**2
print(err.sum()/len(err))


288.194411898

Here for legacy


In [41]:
in_dim = len(XX[0])
out_dim = 1  

model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# in the first layer, you must specify the expected input data shape:
# here, 20-dimensional vectors.
model.add(Dense(128, input_shape=(in_dim,)))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(1, init='uniform'))
model.add(Activation('linear'))

#sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
#model.compile(loss='mean_squared_error', optimizer=sgd)

rms = RMSprop()
model.compile(loss='mean_squared_error', optimizer=rms)

#model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
#score = model.evaluate(X_test, y_test, batch_size=16)

In [42]:
prep = []
for i in y_train:
    prep.append(min(i,20))

In [43]:
prep=np.array(prep)
mi,ma = prep.min(),prep.max()
fy = (prep-mi) / (ma-mi)
#my = fy.max()
#fy = fy/fy.max()

In [44]:
model.fit(np.array(X_train), fy, batch_size=10, nb_epoch=10, validation_split=0.1)


Train on 2224 samples, validate on 248 samples
Epoch 1/10
2224/2224 [==============================] - 0s - loss: 0.0628 - val_loss: 0.0506
Epoch 2/10
2224/2224 [==============================] - 0s - loss: 0.0537 - val_loss: 0.0509
Epoch 3/10
2224/2224 [==============================] - 0s - loss: 0.0517 - val_loss: 0.0541
Epoch 4/10
2224/2224 [==============================] - 0s - loss: 0.0520 - val_loss: 0.0500
Epoch 5/10
2224/2224 [==============================] - 0s - loss: 0.0519 - val_loss: 0.0525
Epoch 6/10
2224/2224 [==============================] - 0s - loss: 0.0523 - val_loss: 0.0489
Epoch 7/10
2224/2224 [==============================] - 0s - loss: 0.0517 - val_loss: 0.0551
Epoch 8/10
2224/2224 [==============================] - 0s - loss: 0.0512 - val_loss: 0.0498
Epoch 9/10
2224/2224 [==============================] - 0s - loss: 0.0516 - val_loss: 0.0583
Epoch 10/10
2224/2224 [==============================] - 0s - loss: 0.0516 - val_loss: 0.0497
Out[44]:
<keras.callbacks.History at 0x112430c10>

In [45]:
pred = model.predict(np.array(X_test))*ma+mi

In [46]:
err = (pred-y_test)**2
err.sum()/len(err)


Out[46]:
182460.82171163053

In [ ]:
r = random.randrange(len(X_train))
print("(Train) Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_train[r]]))[0][0]*ma+mi,y_train[r]))

r = random.randrange(len(X_test))
print("(Test)  Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_test[r]]))[0][0]*ma+mi,y_test[r]))

Predict on testset


In [93]:
%%time
#filename = "data/reduced_test_5000.csv"
filename = "data/test.csv"
test = pd.read_csv(filename)
test = test.set_index('Id')


CPU times: user 16.4 s, sys: 4.88 s, total: 21.3 s
Wall time: 22.3 s

In [94]:
features_columns = list([u'Ref', u'Ref_5x5_10th',
       u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
       u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
       u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
       u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
       u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
       u'Kdp_5x5_50th', u'Kdp_5x5_90th'])

def getX(raw):
    selected_columns = list([ u'minutes_past',u'radardist_km', u'Ref', u'Ref_5x5_10th',
       u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
       u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
       u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
       u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
       u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
       u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
    
    data = raw[selected_columns]
    
    docX= []
    for i in data.index.unique():
        if isinstance(data.loc[i],pd.core.series.Series):
            m = [data.loc[i].as_matrix()]
            docX.append(m)
        else:
            m = data.loc[i].as_matrix()
            docX.append(m)
    X = np.array(docX)
    return X

In [95]:
#%%time
#X=getX(test)

#tmp = []
#for i in X:
#    tmp.append(len(i))
#tmp = np.array(tmp)
#sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
#plt.title("Number of ID per number of observations\n(On test dataset)")
#plt.plot()

In [96]:
testFull = test.dropna()

In [97]:
%%time
X=getX(testFull)  # 1min
#XX = [np.array(t).mean(0) for t in X]  # 10s


CPU times: user 1min 12s, sys: 1.87 s, total: 1min 14s
Wall time: 1min 14s

In [ ]:
XX=addFeatures(X)

In [99]:
pd.DataFrame(gbr.predict(XX)).describe()


Out[99]:
0
count 235515.000000
mean 5.131102
std 7.194517
min -10.455328
25% 1.638401
50% 3.263728
75% 5.932110
max 167.607432

In [100]:
predFull = zip(testFull.index.unique(),gbr.predict(XX))

In [101]:
testNan = test.drop(test[features_columns].dropna(how='all').index)

In [102]:
tmp = np.empty(len(testNan))
tmp.fill(0.445000)   # 50th percentile of full Nan dataset
predNan = zip(testNan.index.unique(),tmp)

In [103]:
testLeft = test.drop(testNan.index.unique()).drop(testFull.index.unique())

In [104]:
tmp = np.empty(len(testLeft))
tmp.fill(1.27)   # 50th percentile of full Nan dataset
predLeft = zip(testLeft.index.unique(),tmp)

In [105]:
len(testFull.index.unique())


Out[105]:
235515

In [106]:
len(testNan.index.unique())


Out[106]:
232148

In [107]:
len(testLeft.index.unique())


Out[107]:
249962

In [108]:
pred = predFull + predNan + predLeft

In [113]:
pred.sort(key=lambda x: x[0], reverse=False)

In [114]:
submission = pd.DataFrame(pred)
submission.columns = ["Id","Expected"]
submission.head()


Out[114]:
Id Expected
0 1 1.270000
1 2 1.270000
2 3 2.361996
3 4 14.492731
4 5 0.445000

In [115]:
submission.loc[submission['Expected']<0,'Expected'] = 0.445

In [116]:
submission.to_csv("submit4.csv",index=False)

In [ ]:


In [73]:
filename = "data/sample_solution.csv"
sol = pd.read_csv(filename)

In [74]:
sol


Out[74]:
Id Expected
0 1 0.085765
1 2 0.000000
2 3 1.594004
3 4 6.913380
4 5 0.000000
5 6 0.173935
6 7 3.219921
7 8 0.867394
8 9 0.000000
9 10 14.182371
10 11 0.911013
11 12 0.034835
12 13 2.733501
13 14 0.709341
14 15 0.000000
15 16 0.000000
16 17 0.000000
17 18 0.393315
18 19 0.291799
19 20 0.000000
20 21 0.000000
21 22 2.031596
22 23 2.236317
23 24 0.000000
24 25 0.000000
25 26 0.009609
26 27 0.020464
27 28 0.000000
28 29 0.000000
29 30 3.074862
... ... ...
717595 717596 2.573862
717596 717597 0.000000
717597 717598 6.580513
717598 717599 0.270776
717599 717600 0.177539
717600 717601 1.133367
717601 717602 0.000000
717602 717603 9.055868
717603 717604 0.633148
717604 717605 17.524065
717605 717606 0.000000
717606 717607 0.000000
717607 717608 0.000000
717608 717609 14.180502
717609 717610 1.387969
717610 717611 0.000000
717611 717612 0.527286
717612 717613 0.000000
717613 717614 0.164476
717614 717615 2.652251
717615 717616 0.302655
717616 717617 0.183060
717617 717618 0.142695
717618 717619 0.000000
717619 717620 0.343296
717620 717621 0.064034
717621 717622 0.000000
717622 717623 1.090277
717623 717624 1.297023
717624 717625 0.000000

717625 rows × 2 columns


In [ ]:
ss = np.array(sol)

In [ ]:
%%time
for a,b in predFull:
    ss[a-1][1]=b

In [ ]:
ss

In [75]:
sub = pd.DataFrame(pred)
sub.columns = ["Id","Expected"]
sub.Id = sub.Id.astype(int)
sub.head()


Out[75]:
Id Expected
0 1 1.270000
1 2 1.270000
2 3 2.378660
3 4 8.851727
4 5 0.445000

In [76]:
sub.to_csv("submit3.csv",index=False)

In [ ]:


In [ ]:


In [ ]:


In [ ]:


In [ ]:


In [ ]:


In [ ]:


In [ ]:


In [ ]:


In [ ]: