T81-558: Applications of Deep Neural Networks

Class 6: Preprocessing.

Why is Preprocessing Necessary

The feature vector, the input to a model (such as a neural network), must be completely numeric. Converting non-numeric data into numeric is one major component of preprocessing. It is also often important to preprocess numeric values. Scikit-learn provides a large number of preprocessing functions:

However, this is just the beginning. The success of your neural network's predictions is often directly tied to the data representation.

Preprocessing Functions

The following functions will be used in conjunction with TensorFlow to help preprocess the data. Some of these were covered previously, some are new.

It is okay to just use them. For better understanding, try to see how they work.

These functions allow you to build the feature vector for a neural network. Consider the following:

  • Predictors/Inputs
    • Fill any missing inputs with the median for that column. Use missing_median.
    • Encode textual/categorical values with encode_text_dummy or more creative means (see last part of this class session).
    • Encode numeric values with encode_numeric_zscore, encode_numeric_binary or encode_numeric_range.
    • Consider removing outliers: remove_outliers
  • Output
    • Discard rows with missing outputs.
    • Encode textual/categorical values with encode_text_index.
    • Do not encode output numeric values.
    • Consider removing outliers: remove_outliers
  • Produce final feature vectors (x) and expected output (y) with to_xy.

Complete Set of Preprocessing Functions


In [3]:
import pandas as pd
import sklearn.preprocessing
from sklearn.feature_extraction.text import TfidfTransformer

# Encode text values to dummie variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)    
def encode_text_dummy(df,name):
    dummies = pd.get_dummies(df[name])
    for x in dummies.columns:
        dummy_name = "{}-{}".format(name,x)
        df[dummy_name] = dummies[x]
    df.drop(name, axis=1, inplace=True)
    
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue)    
def encode_text_index(df,name): 
    le = preprocessing.LabelEncoder()
    df[name] = le.fit_transform(df[name])
    return le.classes_
                
# Encode a numeric column as zscores    
def encode_numeric_zscore(df,name,mean=None,sd=None):
    if mean is None:
        mean = df[name].mean()
        
    if sd is None:
        sd = df[name].std()
        
    df[name] = (df[name]-mean)/sd
    
# Convert all missing values in the specified column to the median
def missing_median(df, name):
    med = df[name].median()
    df[name] = df[name].fillna(med)
    
# Remove all rows where the specified column is +/- sd standard deviations
def remove_outliers(df, name, sd):
    drop_rows = df.index[(np.abs(df[name]-df[name].mean())>=(sd*df[name].std()))]
    df.drop(drop_rows,axis=0,inplace=True)
    
# Encode a column to a range between normalized_low and normalized_high.
def encode_numeric_range(df, name, normalized_low =-1, normalized_high =1, 
                         data_low=None, data_high=None):
    
    if data_low is None:
        data_low = min(df[name])
        data_high = max(df[name])
    
    df[name] = ((df[name] - data_low) / (data_high - data_low)) \
                * (normalized_high - normalized_low) + normalized_low

# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df,target):
    result = []
    for x in df.columns:
        if x != target:
            result.append(x)

    # find out the type of the target column.  Is it really this hard? :(
    target_type = df[target].dtypes
    target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type
    
    # Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
    if target_type in (np.int64, np.int32):
        # Classification
        return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.int32)
    else:
        # Regression
        return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.float32)

Analyzing a Dataset

The following script can be used to give a high level overview of how a dataset appears.


In [4]:
ENCODING = 'utf-8'

def expand_categories(values):
    result = []
    s = values.value_counts()
    t = float(len(values))
    for v in s.index:
        result.append("{}:{}%".format(v,round(100*(s[v]/t),2)))
    return "[{}]".format(",".join(result))
        
def analyze(filename):
    print()
    print("Analyzing: {}".format(filename))
    df = pd.read_csv(filename,encoding=ENCODING)
    cols = df.columns.values
    total = float(len(df))

    print("{} rows".format(int(total)))
    for col in cols:
        uniques = df[col].unique()
        unique_count = len(uniques)
        if unique_count>100:
            print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100)))
        else:
            print("** {}:{}".format(col,expand_categories(df[col])))
            expand_categories(df[col])

The analyze script can be run on the MPG dataset.


In [5]:
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore

path = "./data/"

filename_read = os.path.join(path,"auto-mpg.csv")
analyze(filename_read)


Analyzing: ./data/auto-mpg.csv
398 rows
** mpg:129 (32%)
** cylinders:[4:51.26%,8:25.88%,6:21.11%,3:1.01%,5:0.75%]
** displacement:[97.0:5.28%,350.0:4.52%,98.0:4.52%,250.0:4.27%,318.0:4.27%,140.0:4.02%,400.0:3.27%,225.0:3.27%,91.0:3.02%,121.0:2.76%,302.0:2.76%,232.0:2.76%,151.0:2.51%,120.0:2.26%,200.0:2.01%,351.0:2.01%,85.0:2.01%,90.0:2.01%,231.0:2.01%,122.0:1.76%,105.0:1.76%,304.0:1.76%,79.0:1.51%,156.0:1.51%,119.0:1.51%,258.0:1.26%,89.0:1.26%,107.0:1.26%,108.0:1.26%,135.0:1.26%,360.0:1.01%,86.0:1.01%,134.0:1.01%,116.0:1.01%,112.0:1.01%,305.0:1.01%,70.0:0.75%,113.0:0.75%,455.0:0.75%,307.0:0.75%,168.0:0.75%,198.0:0.75%,146.0:0.75%,260.0:0.75%,173.0:0.75%,429.0:0.75%,199.0:0.5%,141.0:0.5%,163.0:0.5%,262.0:0.5%,71.0:0.5%,440.0:0.5%,383.0:0.5%,88.0:0.25%,97.5:0.25%,340.0:0.25%,144.0:0.25%,390.0:0.25%,83.0:0.25%,96.0:0.25%,80.0:0.25%,78.0:0.25%,76.0:0.25%,72.0:0.25%,81.0:0.25%,104.0:0.25%,267.0:0.25%,100.0:0.25%,101.0:0.25%,110.0:0.25%,111.0:0.25%,183.0:0.25%,181.0:0.25%,114.0:0.25%,115.0:0.25%,171.0:0.25%,155.0:0.25%,130.0:0.25%,131.0:0.25%,145.0:0.25%,454.0:0.25%,68.0:0.25%]
** horsepower:[150:5.53%,90:5.03%,88:4.77%,110:4.52%,100:4.27%,75:3.52%,95:3.52%,105:3.02%,67:3.02%,70:3.02%,65:2.51%,85:2.26%,97:2.26%,145:1.76%,80:1.76%,140:1.76%,72:1.51%,84:1.51%,68:1.51%,78:1.51%,92:1.51%,?:1.51%,115:1.26%,130:1.26%,180:1.26%,175:1.26%,60:1.26%,170:1.26%,71:1.26%,86:1.26%,165:1.01%,52:1.01%,120:1.01%,83:1.01%,76:1.01%,48:0.75%,125:0.75%,190:0.75%,112:0.75%,69:0.75%,74:0.75%,63:0.75%,215:0.75%,96:0.75%,225:0.75%,139:0.5%,153:0.5%,87:0.5%,198:0.5%,53:0.5%,79:0.5%,129:0.5%,98:0.5%,155:0.5%,160:0.5%,81:0.5%,58:0.5%,62:0.5%,46:0.5%,66:0.25%,91:0.25%,142:0.25%,102:0.25%,116:0.25%,89:0.25%,167:0.25%,137:0.25%,135:0.25%,49:0.25%,149:0.25%,61:0.25%,133:0.25%,148:0.25%,220:0.25%,103:0.25%,77:0.25%,64:0.25%,158:0.25%,54:0.25%,208:0.25%,132:0.25%,152:0.25%,122:0.25%,210:0.25%,93:0.25%,200:0.25%,94:0.25%,108:0.25%,230:0.25%,193:0.25%,138:0.25%,107:0.25%,113:0.25%,82:0.25%]
** weight:351 (88%)
** acceleration:[14.5:5.78%,15.5:5.28%,16.0:4.02%,14.0:4.02%,13.5:3.77%,15.0:3.52%,17.0:3.52%,16.5:3.27%,13.0:3.02%,19.0:3.02%,12.0:2.51%,16.4:2.26%,12.5:2.01%,18.0:2.01%,11.0:1.76%,11.5:1.76%,14.9:1.76%,15.8:1.76%,13.2:1.51%,19.5:1.51%,17.3:1.26%,21.0:1.26%,18.5:1.26%,14.4:1.26%,18.2:1.26%,14.7:1.26%,16.9:1.01%,15.7:1.01%,17.6:1.01%,18.6:1.01%,10.0:1.01%,15.4:1.01%,16.2:1.01%,17.5:1.01%,16.7:0.75%,12.8:0.75%,20.5:0.75%,19.4:0.75%,15.3:0.75%,17.7:0.75%,14.2:0.75%,16.6:0.75%,14.8:0.75%,15.2:0.75%,19.2:0.75%,13.4:0.5%,13.7:0.5%,18.7:0.5%,9.5:0.5%,12.6:0.5%,8.5:0.5%,20.1:0.5%,12.2:0.5%,12.9:0.5%,19.6:0.5%,14.3:0.5%,11.4:0.5%,15.1:0.5%,17.2:0.5%,22.2:0.5%,13.6:0.5%,16.8:0.5%,17.4:0.5%,13.8:0.5%,15.9:0.5%,13.9:0.5%,17.8:0.5%,24.8:0.25%,10.5:0.25%,9.0:0.25%,23.5:0.25%,11.3:0.25%,16.1:0.25%,20.4:0.25%,14.1:0.25%,18.8:0.25%,11.2:0.25%,12.1:0.25%,18.1:0.25%,21.8:0.25%,21.5:0.25%,20.7:0.25%,19.9:0.25%,17.1:0.25%,24.6:0.25%,8.0:0.25%,23.7:0.25%,22.1:0.25%,21.7:0.25%,11.1:0.25%,18.3:0.25%,11.6:0.25%,17.9:0.25%,15.6:0.25%,21.9:0.25%]
** year:[73:10.05%,78:9.05%,76:8.54%,82:7.79%,75:7.54%,81:7.29%,80:7.29%,79:7.29%,70:7.29%,77:7.04%,72:7.04%,71:7.04%,74:6.78%]
** origin:[1:62.56%,3:19.85%,2:17.59%]
** name:305 (76%)

Preprocessing Examples

The above preprocessing functions can be used in a variety of ways.


In [6]:
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore

path = "./data/"

filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])

# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_range(df, 'cylinders',0,1)
encode_numeric_range(df, 'displacement',0,1)
encode_numeric_zscore(df, 'acceleration')
#encode_numeric_binary(df,'mpg',20)
#df['origin'] = df['origin'].astype(str)
#encode_text_tfidf(df, 'origin')

# Drop outliers in horsepower
print("Length before MPG outliers dropped: {}".format(len(df)))
remove_outliers(df,'mpg',2)
print("Length after MPG outliers dropped: {}".format(len(df)))

print(df)


Length before MPG outliers dropped: 398
Length after MPG outliers dropped: 388
      mpg  cylinders  displacement  horsepower    weight  acceleration  year  \
0    18.0        1.0      0.617571    0.672271  0.630077     -1.293870    70   
1    15.0        1.0      0.728682    1.587959  0.853259     -1.475181    70   
2    18.0        1.0      0.645995    1.195522  0.549778     -1.656492    70   
3    16.0        1.0      0.609819    1.195522  0.546236     -1.293870    70   
4    17.0        1.0      0.604651    0.933897  0.565130     -1.837804    70   
5    15.0        1.0      0.932817    2.451322  1.618455     -2.019115    70   
6    14.0        1.0      0.997416    3.026898  1.633806     -2.381737    70   
7    14.0        1.0      0.961240    2.896085  1.584210     -2.563048    70   
8    14.0        1.0      1.000000    3.157710  1.717647     -2.019115    70   
9    15.0        1.0      0.832041    2.242022  1.038654     -2.563048    70   
10   15.0        1.0      0.813953    1.718772  0.699747     -2.019115    70   
11   14.0        1.0      0.702842    1.457147  0.754067     -2.744360    70   
12   15.0        1.0      0.857881    1.195522  0.933557     -2.200426    70   
13   14.0        1.0      1.000000    3.157710  0.136478     -2.019115    70   
14   24.0        0.2      0.116279   -0.243417 -0.706655     -0.206002    70   
15   22.0        0.6      0.335917   -0.243417 -0.162279     -0.024691    70   
16   18.0        0.6      0.338501   -0.191092 -0.231950     -0.024691    70   
17   21.0        0.6      0.341085   -0.505042 -0.452770      0.156620    70   
18   27.0        0.2      0.074935   -0.426554 -0.992422     -0.387314    70   
19   26.0        0.2      0.074935   -1.525380 -1.340775      1.788421    70   
20   25.0        0.2      0.108527   -0.452717 -0.352397      0.700554    70   
21   24.0        0.2      0.100775   -0.374229 -0.638165     -0.387314    70   
22   25.0        0.2      0.093023   -0.243417 -0.703112      0.700554    70   
23   26.0        0.2      0.136951    0.227509 -0.869613     -1.112559    70   
24   21.0        0.6      0.338501   -0.374229 -0.380738     -0.206002    70   
25   10.0        1.0      0.754522    2.896085  1.942010     -0.568625    70   
26   10.0        1.0      0.617571    2.503648  1.659785     -0.206002    70   
27   11.0        1.0      0.645995    2.765273  1.666870     -0.749936    70   
28    9.0        1.0      0.609819    2.320510  2.080171      1.063176    70   
29   27.0        0.2      0.074935   -0.426554 -0.992422     -0.387314    71   
..    ...        ...           ...         ...       ...           ...   ...   
367  28.0        0.2      0.113695   -0.426554 -0.431515      1.462061    82   
368  27.0        0.2      0.113695   -0.426554 -0.390185      1.099439    82   
369  34.0        0.2      0.113695   -0.426554 -0.679495      0.881865    82   
370  31.0        0.2      0.113695   -0.505042 -0.466940      0.229145    82   
371  29.0        0.2      0.173127   -0.531204 -0.525983      0.156620    82   
372  27.0        0.2      0.214470   -0.374229 -0.278003      0.881865    82   
373  24.0        0.2      0.186047   -0.321904 -0.124492      0.301669    82   
374  23.0        0.2      0.214470   -0.282660  0.076254      1.788421    82   
375  36.0        0.2      0.095607   -0.792829 -1.169551     -0.097216    82   
376  37.0        0.2      0.059432   -0.949804 -1.116412      0.954390    82   
377  31.0        0.2      0.059432   -0.949804 -1.181360      0.736816    82   
378  38.0        0.2      0.095607   -1.080617 -0.998327     -0.314789    82   
379  36.0        0.2      0.077519   -0.897479 -0.998327      0.628029    82   
380  36.0        0.2      0.134367   -0.426554 -0.956997     -0.387314    82   
381  36.0        0.2      0.100775   -0.766667 -0.903858     -0.387314    82   
382  34.0        0.2      0.103359   -0.897479 -0.856624      0.482980    82   
383  38.0        0.2      0.059432   -0.975967 -1.187264     -0.206002    82   
384  32.0        0.2      0.059432   -0.975967 -1.187264      0.047833    82   
385  38.0        0.2      0.059432   -0.975967 -1.151838      0.229145    82   
386  25.0        0.6      0.291990    0.149021 -0.030023      0.301669    82   
387  38.0        0.6      0.501292   -0.505042  0.052637      0.519243    82   
388  26.0        0.2      0.227390   -0.321904 -0.455132     -0.387314    82   
389  22.0        0.6      0.423773    0.201346 -0.159917     -0.314789    82   
390  32.0        0.2      0.196382   -0.217254 -0.360663     -0.604887    82   
391  36.0        0.2      0.173127   -0.531204 -0.709016     -0.931247    82   
392  27.0        0.2      0.214470   -0.374229 -0.024119      0.628029    82   
393  27.0        0.2      0.186047   -0.478879 -0.213056      0.011571    82   
395  32.0        0.2      0.173127   -0.531204 -0.797581     -1.438919    82   
396  28.0        0.2      0.134367   -0.662017 -0.407897      1.099439    82   
397  31.0        0.2      0.131783   -0.583529 -0.295716      1.389537    82   

     origin  
0         1  
1         1  
2         1  
3         1  
4         1  
5         1  
6         1  
7         1  
8         1  
9         1  
10        1  
11        1  
12        1  
13        1  
14        3  
15        1  
16        1  
17        1  
18        3  
19        2  
20        2  
21        2  
22        2  
23        2  
24        1  
25        1  
26        1  
27        1  
28        1  
29        3  
..      ...  
367       1  
368       1  
369       1  
370       1  
371       1  
372       1  
373       1  
374       1  
375       2  
376       3  
377       3  
378       1  
379       1  
380       3  
381       3  
382       3  
383       3  
384       3  
385       3  
386       1  
387       1  
388       1  
389       1  
390       3  
391       1  
392       1  
393       1  
395       1  
396       1  
397       1  

[388 rows x 8 columns]

Other Examples: Dealing with Addresses

Addresses can be difficult to encode into a neural network. There are many different approaches, and you must consider how you can transform the address into something more meaningful. Map coordinates can be a good approach. Latitude and longitude can be a useful encoding. Thanks to the power of the Internet, it is relatively easy to transform an address into its latitude and longitude values. The following code determines the coordinates of Washington University:


In [1]:
import requests

address = "1 Brookings Dr, St. Louis, MO 63130"

response = requests.get('https://maps.googleapis.com/maps/api/geocode/json?address='+address)

resp_json_payload = response.json()

print(resp_json_payload['results'][0]['geometry']['location'])


{'lat': 38.6470653, 'lng': -90.30263459999999}

If latitude and longitude are simply fed into the neural network as two features, they might not be overly helpful. These two values would allow your neural network to cluster locations on a map. Sometimes cluster locations on a map can be useful. Consider the percentage of the population that smokes in the USA by state:

The above map shows that certian behaviors, like smoking, can be clustered by global region.

However, often you will want to transform the coordinates into distances. It is reasonably easy to estimate the distance between any two points on Earth by using the great circle distance between any two points on a sphere:

The following code implements this formula:

$\Delta\sigma=\arccos\bigl(\sin\phi_1\cdot\sin\phi_2+\cos\phi_1\cdot\cos\phi_2\cdot\cos(\Delta\lambda)\bigr)$

$d = r \, \Delta\sigma$


In [7]:
from math import sin, cos, sqrt, atan2, radians

# Distance function
def distance_lat_lng(lat1,lng1,lat2,lng2):
    # approximate radius of earth in km
    R = 6373.0

    # degrees to radians (lat/lon are in degrees)
    lat1 = radians(lat1)
    lng1 = radians(lng1)
    lat2 = radians(lat2)
    lng2 = radians(lng2)

    dlng = lng2 - lng1
    dlat = lat2 - lat1

    a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlng / 2)**2
    c = 2 * atan2(sqrt(a), sqrt(1 - a))

    return R * c

# Find lat lon for address
def lookup_lat_lng(address):
    response = requests.get('https://maps.googleapis.com/maps/api/geocode/json?address='+address)
    json = response.json()
    if len(json['results']) == 0:
        print("Can't find: {}".format(address))
        return 0,0
    map = json['results'][0]['geometry']['location']
    return map['lat'],map['lng']


# Distance between two locations

import requests

address1 = "1 Brookings Dr, St. Louis, MO 63130" 
address2 = "3301 College Ave, Fort Lauderdale, FL 33314"

lat1, lng1 = lookup_lat_lng(address1)
lat2, lng2 = lookup_lat_lng(address2)

print("Distance, St. Louis, MO to Ft. Lauderdale, FL: {} km".format(
        distance_lat_lng(lat1,lng1,lat2,lng2)))


Distance, St. Louis, MO to Ft. Lauderdale, FL: 1685.0833252717607 km

Distances can be useful to encode addresses as. You must consider what distance might be useful for your dataset. Consider:

  • Distance to major metropolitan area
  • Distance to competitor
  • Distance to distribution center
  • Distance to retail outlet

The following code calculates the distance between 10 universities and washu:


In [26]:
# Encoding other universities by their distance to Washington University

schools = [
    ["Princeton University, Princeton, NJ 08544", 'Princeton'],
    ["Massachusetts Hall, Cambridge, MA 02138", 'Harvard'],
    ["5801 S Ellis Ave, Chicago, IL 60637", 'University of Chicago'],
    ["Yale, New Haven, CT 06520", 'Yale'],
    ["116th St & Broadway, New York, NY 10027", 'Columbia University'],
    ["450 Serra Mall, Stanford, CA 94305", 'Stanford'],
    ["77 Massachusetts Ave, Cambridge, MA 02139", 'MIT'],
    ["Duke University, Durham, NC 27708", 'Duke University'],
    ["University of Pennsylvania, Philadelphia, PA 19104", 'University of Pennsylvania'],
    ["Johns Hopkins University, Baltimore, MD 21218", 'Johns Hopkins']
]

lat1, lng1 = lookup_lat_lng("1 Brookings Dr, St. Louis, MO 63130")

for address, name in schools:
    lat2,lng2 = lookup_lat_lng(address)
    dist = distance_lat_lng(lat1,lng1,lat2,lng2)
    print("School '{}', distance to wustl is: {}".format(name,dist))


School 'Princeton', distance to wustl is: 1354.209708261112
School 'Harvard', distance to wustl is: 1670.48400266576
School 'University of Chicago', distance to wustl is: 418.0768183943189
School 'Yale', distance to wustl is: 1504.9478116980558
School 'Columbia University', distance to wustl is: 1021.5557486863092
School 'Stanford', distance to wustl is: 2781.0358215314873
School 'MIT', distance to wustl is: 1671.8200768854172
School 'Duke University', distance to wustl is: 1047.4669155948627
School 'University of Pennsylvania', distance to wustl is: 1306.6967081436705
School 'Johns Hopkins', distance to wustl is: 1185.939948468073

Other Examples: Bag of Words

The Bag of Words algorithm is a common means of encoding strings. (Harris, 1954) Each input represents the count of one particular word. The entire input vector would contain one value for each unique word. Consider the following strings.

Of Mice and Men
Three Blind Mice
Blind Man’s Bluff
Mice and More Mice

We have the following unique words. This is our “dictionary.”

Input 0 : and
Input 1 : blind
Input 2 : bluff
Input 3 : man’s
Input 4 : men
Input 5 : mice
Input 6 : more
Input 7 : of
Input 8 : three

The four lines above would be encoded as follows.

Of Mice and Men [ 0 4 5 7 ]
Three Blind Mice [ 1 5 8 ]
Blind Man ’ s Bl u f f [ 1 2 3 ]
Mice and More Mice [ 0 5 6 ]

Of course we have to fill in the missing words with zero, so we end up with the following.

  • Of Mice and Men [ 1 , 0 , 0 , 0 , 1 , 1 , 0 , 1 , 0 ]
  • Three Blind Mice [ 0 , 1 , 0 , 0 , 0 , 1 , 0 , 0 , 1 ]
  • Blind Man’s Bluff [ 0 , 1 , 1 , 1 , 0 , 0 , 0 , 0 , 0 ]
  • Mice and More Mice [ 1 , 0 , 0 , 0 , 0 , 2 , 1 , 0 , 0 ]

Notice that we now have a consistent vector length of nine. Nine is the total number of words in our “dictionary”. Each component number in the vector is an index into our dictionary of available words. At each vector component is stored a count of the number of words for that dictionary entry. Each string will usually contain only a small subset of the dictionary. As a result, most of the vector values will be zero.

As you can see, one of the most difficult aspects of machine learning programming is translating your problem into a fixed-length array of floating point numbers. The following section shows how to translate several examples.


In [40]:
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    'This is the first document.',
    'This is the second second document.',
    'And the third one.',
    'Is this the first document?']

vectorizer = CountVectorizer(min_df=1)

vectorizer.fit(corpus)

print("Mapping")
print(vectorizer.vocabulary_)

print()
print("Encoded")
x = vectorizer.transform(corpus)
print(x.toarray())


Mapping
{'third': 7, 'this': 8, 'the': 6, 'first': 2, 'document': 1, 'second': 5, 'one': 4, 'is': 3, 'and': 0}

Encoded
[[0 1 1 1 0 0 1 0 1]
 [0 1 0 1 0 2 1 0 1]
 [1 0 0 0 1 0 1 1 0]
 [0 1 1 1 0 0 1 0 1]]

In [27]:
from sklearn.feature_extraction.text import CountVectorizer

path = "./data/"

filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])

corpus = df['name']

vectorizer = CountVectorizer(min_df=1)

vectorizer.fit(corpus)

print("Mapping")
print(vectorizer.vocabulary_)

print()
print("Encoded")
x = vectorizer.transform(corpus)
print(x.toarray())

print(len(vectorizer.vocabulary_))

# reverse lookup for columns
bag_cols = [0] * len(vectorizer.vocabulary_)
for i,key in enumerate(vectorizer.vocabulary_):
    bag_cols[i] = key


Mapping
{'99e': 56, 'gran': 151, 'c20': 77, 'valiant': 284, 'peugeot': 215, 'concours': 104, 'ciera': 97, 'vokswagen': 288, 'new': 206, '100': 1, 'liftback': 176, 'tc3': 271, 'arrow': 64, 'beetle': 70, 'premier': 223, 'matador': 190, 'b210': 69, 'skylark': 251, 'seville': 248, '2000': 21, 'volvo': 291, 'nissan': 208, 'astro': 66, 'dart': 121, 'luxus': 181, 'rabbit': 225, 'rampage': 226, 'capri': 80, 'cvcc': 118, 'grand': 153, 'c10': 76, 'v8': 283, 'se': 245, 'tr7': 278, 'camaro': 79, 'lemans': 174, '304': 35, 'sunbird': 265, 'zx': 299, '124b': 9, 'classic': 100, 'ford': 143, 'concord': 103, 'nova': 209, 'challenger': 88, 'supreme': 267, 'air': 60, 'chevy': 95, 'accord': 59, 'prix': 224, 'gx': 158, '5000s': 46, 'lebaron': 172, 'chevette': 92, 'aspen': 65, 'wagon': 293, 'audi': 67, 'cressida': 112, '244dl': 28, 'hatchback': 160, '1500': 16, '280s': 32, '131': 13, 'granada': 152, 'cuda': 115, 'woody': 294, 'fiesta': 141, 'thunderbird': 273, 'colt': 102, '111': 3, 'starlet': 261, 'cordoba': 105, 'monte': 202, '1200d': 7, 'amc': 62, 'sport': 254, 'maxima': 193, '610': 51, '200sx': 23, 'reliant': 231, 'ltd': 180, 'corolla': 106, 'hardtop': 159, 'type': 281, 'maxda': 192, 'chevrolet': 94, '340': 39, 'mazda': 194, 'duster': 132, '240d': 27, 'pickup': 217, 'pontiac': 221, 'manta': 187, '210': 24, 'skyhawk': 250, 'chevelle': 91, 'stanza': 259, 'citation': 98, 'xe': 296, 'pinto': 218, 'jetta': 170, 'auto': 68, 'mercury': 197, 'phoenix': 216, 'eldorado': 133, '810': 54, 'cruiser': 114, 'monarch': 201, 'datsun': 123, 'saab': 239, 'f108': 137, 'coupe': 111, '5000': 45, 'sj': 249, 'torino': 274, 'miser': 198, 'opel': 213, 'gt': 156, 'galaxie': 147, 'fox': 144, 'lj': 178, 'yorker': 297, 'pl510': 219, 'cavalier': 85, 'dpl': 131, '144ea': 14, 'fairmont': 139, 'sst': 257, '1131': 4, 'hi': 161, 'cobra': 101, 'prelude': 222, 'omni': 212, 'gl': 149, 'cadillac': 78, 'vista': 287, 'volare': 289, 'rx2': 237, '1900': 19, '12tl': 11, 'cricket': 113, 'turbo': 280, 'maverick': 191, 'volkswagen': 290, 'super': 266, '411': 42, 'lx': 182, 'j2000': 169, 'subaru': 263, '18i': 18, 'estate': 136, 'lynx': 183, '310': 36, 'century': 87, '10': 0, 'st': 258, 'dl': 128, 'd100': 119, '505s': 48, 'marquis': 189, 'town': 275, 'runabout': 235, 'f250': 138, 'landau': 171, 'civic': 99, 'toyota': 276, 'brougham': 74, 'sedan': 247, '2h': 33, 'monza': 203, 'buick': 75, 'impala': 167, 'iii': 166, 'squire': 256, 'renault': 232, '245': 29, 'diesel': 126, '200': 20, '4w': 43, 'caprice': 81, 'honda': 162, 'ls': 179, 'satellite': 243, 'ii': 165, 'medallion': 195, 'ambassador': 61, 'magnum': 184, 'monaco': 200, 'special': 252, '1600': 17, 'sportabout': 255, 'aries': 63, '280': 31, 'tc': 270, 'fury': 145, '510': 49, '99le': 58, '145e': 15, 'regal': 229, 'triumph': 279, '710': 53, 'oldsmobile': 210, 'lesabre': 175, '626': 52, '350': 40, '100ls': 2, 'ventura': 286, 'celica': 86, 'sapporo': 242, 'newport': 207, 'mark': 188, '12': 5, 'royale': 234, 'charger': 90, 'gremlin': 154, '2002': 22, 'plymouth': 220, '604sl': 50, 'royal': 233, '504': 47, 'bmw': 73, '320': 37, 'rebel': 228, 'strada': 262, 'delta': 124, '88': 55, 'chevroelt': 93, 'escort': 135, 'sx': 269, 'man': 186, 'carina': 82, '264gl': 30, 'x1': 295, 'electra': 134, 'ranger': 227, 'sebring': 246, '124': 8, 'tercel': 272, 'regis': 230, '300d': 34, 'glc': 150, 'deluxe': 125, 'salon': 241, 'fiat': 140, '99gle': 57, 'carlo': 83, 'v6': 282, 'mpg': 204, 'model': 199, '128': 10, 'vega': 285, 'malibu': 185, '500': 44, 'zephyr': 298, 'cougar': 109, 'toyouta': 277, 'chrysler': 96, 'starfire': 260, '225': 25, 'vw': 292, 'champ': 89, 'coronet': 108, 'safari': 240, 'pacer': 214, 'lecar': 173, 'gs': 155, 'mustang': 205, 'bel': 71, 'horizon': 163, '4000': 41, 'corona': 107, '1300': 12, 'omega': 211, 'rx': 236, 'isuzu': 168, 'cutlass': 117, 'hornet': 164, '1200': 6, 'dasher': 122, 'ghia': 148, 'door': 130, '320i': 38, 'gtl': 157, '2300': 26, 'd200': 120, 'custom': 116, 'futura': 146, 'firebird': 142, 'sw': 268, 'scirocco': 244, 'country': 110, 'suburb': 264, 'benz': 72, 'diplomat': 127, 'rx3': 238, 'limited': 177, 'catalina': 84, 'spirit': 253, 'dodge': 129, 'mercedes': 196}

Encoded
[[0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 ..., 
 [0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 [1 0 0 ..., 0 0 0]]
300

In [32]:
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import RandomForestRegressor

#x = x.toarray() #.as_matrix()
y = df['mpg'].as_matrix()

# Build a forest and compute the feature importances
forest = RandomForestRegressor(n_estimators=50,
                              random_state=0, verbose = True)
forest.fit(x, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
             axis=0)
indices = np.argsort(importances)[::-1]

# Print the feature ranking
print("Feature ranking:")

for f in range(x.shape[1]):
    print("{}. {} ({})".format(f + 1, bag_cols[f], importances[indices[f]]))


Feature ranking:
1. 99e (0.069858186873746)
2. gran (0.0657962678049988)
3. c20 (0.051008883870636394)
4. valiant (0.04581221475540706)
5. peugeot (0.04198230223754908)
6. concours (0.040399764416811264)
7. ciera (0.032608231867797544)
8. vokswagen (0.028083396361630455)
9. new (0.025365169053742708)
10. 100 (0.020822620576517337)
11. liftback (0.02065610924024198)
12. tc3 (0.019690892181255654)
13. arrow (0.019012669846068458)
14. beetle (0.01750032394359091)
15. premier (0.014938020006637028)
16. matador (0.014369710481955338)
17. b210 (0.01252127394359739)
18. skylark (0.010751429685292812)
19. seville (0.010702428667434061)
20. 2000 (0.010542674460379622)
21. volvo (0.01025593869126505)
22. nissan (0.00986759162389823)
23. astro (0.009133861291584486)
24. dart (0.008772300799934196)
25. luxus (0.008600514953250088)
26. rabbit (0.007821470049179373)
27. rampage (0.007294524170328526)
28. capri (0.007132502583116468)
29. cvcc (0.007099310584072058)
30. grand (0.007078688726990171)
31. c10 (0.006707488660010955)
32. v8 (0.006467280930991713)
33. se (0.006412846067598118)
34. tr7 (0.006102704354248814)
35. camaro (0.006065784714276979)
36. lemans (0.005627410767934647)
37. 304 (0.00516947845622533)
38. sunbird (0.005069313937771301)
39. zx (0.004891430604387839)
40. 124b (0.0048092522191788886)
41. classic (0.004780536832539579)
42. ford (0.004777631333172296)
43. concord (0.004716392725717725)
44. nova (0.004541898767747425)
45. challenger (0.004414682290267593)
46. supreme (0.004409132210097985)
47. air (0.00435238709020031)
48. chevy (0.004323360548335858)
49. accord (0.004213496746988927)
50. prix (0.004143793068001074)
51. gx (0.0039554358634945)
52. 5000s (0.0038994215363000865)
53. lebaron (0.0038913497104200736)
54. chevette (0.0037898467657427794)
55. aspen (0.0037731221796846654)
56. wagon (0.003684782256319336)
57. audi (0.0036779690717969136)
58. cressida (0.0036411312749935355)
59. 244dl (0.003640013972001834)
60. hatchback (0.0035686123303577694)
61. 1500 (0.003521342109599453)
62. 280s (0.00335845979581203)
63. 131 (0.003257560971158881)
64. granada (0.003229361061909285)
65. cuda (0.003227396835646353)
66. woody (0.003186537784297465)
67. fiesta (0.003120437759660097)
68. thunderbird (0.003087949376346005)
69. colt (0.0030263112449864164)
70. 111 (0.003016106870573076)
71. starlet (0.00276047271574072)
72. cordoba (0.0026978828592652574)
73. monte (0.002688867170056436)
74. 1200d (0.0026693369993517064)
75. amc (0.0026662244569612805)
76. sport (0.0026445934235147816)
77. maxima (0.0026216297561914865)
78. 610 (0.0025598751661845055)
79. 200sx (0.002390764949718666)
80. reliant (0.002364731483091029)
81. ltd (0.0023635763066745387)
82. corolla (0.002332253684585771)
83. hardtop (0.0022248933570720693)
84. type (0.002216468454990445)
85. maxda (0.0021936686750714434)
86. chevrolet (0.002184004329724791)
87. 340 (0.002183691148679262)
88. mazda (0.0021542648811175066)
89. duster (0.002108274734943049)
90. 240d (0.002086924858157528)
91. pickup (0.0020084606226251963)
92. pontiac (0.0019475447287013933)
93. manta (0.0018696525404742726)
94. 210 (0.0018685131367991584)
95. skyhawk (0.0018599256451845906)
96. chevelle (0.0018423867732761472)
97. stanza (0.001793781963878932)
98. citation (0.0017536657673044524)
99. xe (0.0016924739724648039)
100. pinto (0.0016821146168706524)
101. jetta (0.0015973634676595821)
102. auto (0.001567377423866302)
103. mercury (0.001553247828919427)
104. phoenix (0.0015515934732918935)
105. eldorado (0.0015470946435324406)
106. 810 (0.0015367331183517652)
107. cruiser (0.0015015773127053352)
108. monarch (0.0014946977354463977)
109. datsun (0.001480305299049996)
110. saab (0.0014415054475451015)
111. f108 (0.00143893424957812)
112. coupe (0.0014242645444977073)
113. 5000 (0.001412836473478591)
114. sj (0.0013666167020922157)
115. torino (0.0013331243291685993)
116. miser (0.0013027672109378438)
117. opel (0.0012971440668855733)
118. gt (0.0012731269372647292)
119. galaxie (0.0012646122310959564)
120. fox (0.0012379424623078653)
121. lj (0.0012011138723334507)
122. yorker (0.0011934146136674873)
123. pl510 (0.0011922624040994222)
124. cavalier (0.0011726956119608016)
125. dpl (0.0011673732265750612)
126. 144ea (0.0011424368454951254)
127. fairmont (0.0011180427415947566)
128. sst (0.0010939718737583688)
129. 1131 (0.0010900464617843424)
130. hi (0.0010895775853801042)
131. cobra (0.0010683834496410622)
132. prelude (0.0010680718093904213)
133. omni (0.0010621788920972875)
134. gl (0.001042064702062226)
135. cadillac (0.0010259708332757275)
136. vista (0.0010223971383224748)
137. volare (0.0010195827581511107)
138. rx2 (0.0010178621729532158)
139. 1900 (0.00101029552067883)
140. 12tl (0.0009779689627034522)
141. cricket (0.0009766221620128143)
142. turbo (0.0009264485726357364)
143. maverick (0.000911111257415155)
144. volkswagen (0.0009044219263565262)
145. super (0.0008865789383651361)
146. 411 (0.0008796177789892935)
147. lx (0.0008774564959933761)
148. j2000 (0.0008766206603768816)
149. subaru (0.000829124566083776)
150. 18i (0.0008199241801540197)
151. estate (0.0008189603660715666)
152. lynx (0.0008175928174699019)
153. 310 (0.000811079939525088)
154. century (0.0007918129610193799)
155. 10 (0.0007799838827481231)
156. st (0.0007763841628620652)
157. dl (0.0007650194191378492)
158. d100 (0.0007564940897592104)
159. 505s (0.0007560080585707475)
160. marquis (0.0007446462138476084)
161. town (0.0007377487862693738)
162. runabout (0.0007271955691454409)
163. f250 (0.000693953567409178)
164. landau (0.0006919151700483239)
165. civic (0.0006918116301433932)
166. toyota (0.0006769789279974073)
167. brougham (0.0006757742311926802)
168. sedan (0.0006439371075384423)
169. 2h (0.0006365554362269344)
170. monza (0.0006246597107919899)
171. buick (0.0006208528466254748)
172. impala (0.0006124576210108167)
173. iii (0.0006122265760668993)
174. squire (0.0005985094949514362)
175. renault (0.0005928129188422803)
176. 245 (0.0005433360409432352)
177. diesel (0.0005413759188962672)
178. 200 (0.000514768693542608)
179. 4w (0.0005041481719848354)
180. caprice (0.0004961670385032416)
181. honda (0.0004934322940419157)
182. ls (0.0004896013460710245)
183. satellite (0.0004855042133954258)
184. ii (0.00047324355013794007)
185. medallion (0.0004636553214286276)
186. ambassador (0.0004583510713887751)
187. magnum (0.0004556646549686148)
188. monaco (0.00044915951815501903)
189. special (0.0004486808906455319)
190. 1600 (0.00044772894391571034)
191. sportabout (0.0004405517411809396)
192. aries (0.0004392824811697179)
193. 280 (0.0004341001012296668)
194. tc (0.0004339938486215451)
195. fury (0.00042944096844703015)
196. 510 (0.0004151566108737538)
197. 99le (0.000410060989628179)
198. 145e (0.0004085692340215761)
199. regal (0.00039946490616565633)
200. triumph (0.0003982892161702556)
201. 710 (0.0003979445287500527)
202. oldsmobile (0.0003958079997130763)
203. lesabre (0.0003684072183124499)
204. 626 (0.00036449207456034375)
205. 350 (0.0003588571469675093)
206. 100ls (0.0003568015050362267)
207. ventura (0.00035052586151976863)
208. celica (0.0003473851206705483)
209. sapporo (0.0003420213936211421)
210. newport (0.00034062927837360987)
211. mark (0.0003402629380177425)
212. 12 (0.0003332124313259237)
213. royale (0.00032545795440500015)
214. charger (0.0003173012054950901)
215. gremlin (0.00030902126148182505)
216. 2002 (0.0003032583621705214)
217. plymouth (0.0003010952820093363)
218. 604sl (0.0002940042075516602)
219. royal (0.00029374738198989053)
220. 504 (0.0002792313697584064)
221. bmw (0.0002788360337060003)
222. 320 (0.0002776152110544088)
223. rebel (0.0002774495469971049)
224. strada (0.00027288389555120234)
225. delta (0.00026900054477813404)
226. 88 (0.0002574910179518917)
227. chevroelt (0.00025436609597443165)
228. escort (0.00025392270248017105)
229. sx (0.00025287550566753503)
230. man (0.0002460656549029872)
231. carina (0.0002450060312576913)
232. 264gl (0.00024106493351103313)
233. x1 (0.00023721685152842815)
234. electra (0.00023212875850958526)
235. ranger (0.00022830094784139393)
236. sebring (0.00022690295668001563)
237. 124 (0.00022469345165969845)
238. tercel (0.0002163029280049272)
239. regis (0.00021596266573046011)
240. 300d (0.0002105909042520773)
241. glc (0.00020924414977481807)
242. deluxe (0.0002015104248796245)
243. salon (0.00020056501666563048)
244. fiat (0.00019906354190628216)
245. 99gle (0.00019792059846661947)
246. carlo (0.00019479349075041875)
247. v6 (0.00019367029509640167)
248. mpg (0.0001911594739921858)
249. model (0.00018016122768815094)
250. 128 (0.0001765273314254329)
251. vega (0.0001580482579352968)
252. malibu (0.00015585857008904678)
253. 500 (0.0001556943298045007)
254. zephyr (0.00015504465520757156)
255. cougar (0.00015464540361501868)
256. toyouta (0.00015218257590328093)
257. chrysler (0.00015182895665000887)
258. starfire (0.00014846194290711842)
259. 225 (0.0001444900598378759)
260. vw (0.0001402926387993275)
261. champ (0.00013473028324044773)
262. coronet (0.0001342501138446045)
263. safari (0.00013309297767526)
264. pacer (0.00013158916135857065)
265. lecar (0.00012430880252108634)
266. gs (0.00011286568384797587)
267. mustang (0.00011265494612653889)
268. bel (0.00010911453069107237)
269. horizon (0.00010552292383302249)
270. 4000 (0.00010289041566846265)
271. corona (0.00010238460442714537)
272. 1300 (0.00010020265098662577)
273. omega (8.85696524480241e-05)
274. rx (8.607245210090368e-05)
275. isuzu (8.106721774674678e-05)
276. cutlass (7.986084576140837e-05)
277. hornet (7.891780273251315e-05)
278. 1200 (7.491853450617297e-05)
279. dasher (7.43822112435414e-05)
280. ghia (7.172640477721485e-05)
281. door (7.170628259704008e-05)
282. 320i (6.916089743903923e-05)
283. gtl (6.851609951694874e-05)
284. 2300 (6.650895674851617e-05)
285. d200 (6.599607563771107e-05)
286. custom (6.51773180411194e-05)
287. futura (6.064394497659656e-05)
288. firebird (5.8273667662948626e-05)
289. sw (5.533106575103077e-05)
290. scirocco (5.377920959814423e-05)
291. country (5.099831329442585e-05)
292. suburb (4.991227840855166e-05)
293. benz (4.626001848766871e-05)
294. diplomat (4.574829401594722e-05)
295. rx3 (4.3489798089526794e-05)
296. limited (3.324011018817211e-05)
297. catalina (3.112809553723385e-05)
298. spirit (2.978749370637539e-05)
299. dodge (1.4736605968426178e-05)
300. mercedes (5.9328904773682955e-06)
[Parallel(n_jobs=1)]: Done  49 tasks       | elapsed:    0.3s
[Parallel(n_jobs=1)]: Done  50 out of  50 | elapsed:    0.3s finished

Other Examples: Time Series

Time series data will need to be encoded for a regular feedforward neural network. In a few classes we will see how to use a recurrent neural network to find patterns over time. For now, we will encode the series into input neurons.

Financial forecasting is a very popular form of temporal algorithm. A temporal algorithm is one that accepts input for values that range over time. If the algorithm supports short term memory (internal state) then ranges over time are supported automatically. If your algorithm does not have an internal state then you should use an input window and a prediction window. Most algorithms do not have an internal state. To see how to use these windows, consider if you would like the algorithm to predict the stock market. You begin with the closing price for a stock over several days:

Day 1 : $45
Day 2 : $47
Day 3 : $48
Day 4 : $40
Day 5 : $41
Day 6 : $43
Day 7 : $45
Day 8 : $57
Day 9 : $50
Day 10 : $41

The first step is to normalize the data. This is necessary whether your algorithm has internal state or not. To normalize, we want to change each number into the percent movement from the previous day. For example, day 2 would become 0.04, because there is a 4% difference between $45 and $47. Once you perform this calculation for every day, the data set will look like the following:

Day 2 : 0. 04
Day 3 : 0. 02
Day 4:−0.16
Day 5 : 0. 02
Day 6 : 0. 04
Day 7 : 0. 04
Day 8 : 0. 04
Day 9:−0.12
Day 10:−0.18

In order to create an algorithm that will predict the next day’s values, we need to think about how to encode this data to be presented to the algorithm. The encoding depends on whether the algorithm has an internal state. The internal state allows the algorithm to use the last few values inputted to help establish trends.

Many machine learning algorithms have no internal state. If this is the case, then you will typically use a sliding window algorithm to encode the data. To do this, we use the last three prices to predict the next one. The inputs would be the last three-day prices, and the output would be the fourth day. The above data could be organized in the following way to provide training data.

These cases specified the ideal output for the given inputs:

[ 0.04 , 0.02 , −0.16 ] −> 0.02
[ 0.02 , −0.16 , 0.02 ] −> 0.04
[ −0.16 , 0.02 , 0.04 ] −> 0.04
[ 0.02 , 0.04 , 0.04 ] −> 0. 26
[ 0.04 , 0.04 , 0.26 ] −> −0.12
[ 0.04 , 0.26 , −0.12 ] −> −0.18

The above encoding would require that the algorithm have three inputs and one output.


In [22]:
import numpy as np

def normalize_price_change(history):
    last = None
    
    result = []
    for price in history:
        if last is not None:
            result.append( float(price-last)/last )
        last = price

    return result

def encode_timeseries_window(source, lag_size, lead_size):
    """
    Encode raw data to a time-series window.
    :param source: A 2D array that specifies the source to be encoded.
    :param lag_size: The number of rows uses to predict.
    :param lead_size: The number of rows to be predicted
    :return: A tuple that contains the x (input) & y (expected output) for training.
    """
    result_x = []
    result_y = []

    output_row_count = len(source) - (lag_size + lead_size) + 1
    

    for raw_index in range(output_row_count):
        encoded_x = []

        # Encode x (predictors)
        for j in range(lag_size):
            encoded_x.append(source[raw_index+j])

        result_x.append(encoded_x)

        # Encode y (prediction)
        encoded_y = []

        for j in range(lead_size):
            encoded_y.append(source[lag_size+raw_index+j])

        result_y.append(encoded_y)

    return result_x, result_y


price_history = [ 45, 47, 48, 40, 41, 43, 45, 57, 50, 41 ]
norm_price_history = normalize_price_change(price_history)

print("Normalized price history:")
print(norm_price_history)

print()
print("Rounded normalized price history:")
norm_price_history = np.round(norm_price_history,2)
print(norm_price_history)


print()
print("Time Boxed(time series encoded):")
x, y = encode_timeseries_window(norm_price_history, 3, 1)

for x_row, y_row in zip(x,y):
    print("{} -> {}".format(np.round(x_row,2), np.round(y_row,2)))


Normalized price history:
[0.044444444444444446, 0.02127659574468085, -0.16666666666666666, 0.025, 0.04878048780487805, 0.046511627906976744, 0.26666666666666666, -0.12280701754385964, -0.18]

Rounded normalized price history:
[ 0.04  0.02 -0.17  0.02  0.05  0.05  0.27 -0.12 -0.18]

Time Boxed(time series encoded):
[ 0.04  0.02 -0.17] -> [ 0.02]
[ 0.02 -0.17  0.02] -> [ 0.05]
[-0.17  0.02  0.05] -> [ 0.05]
[ 0.02  0.05  0.05] -> [ 0.27]
[ 0.05  0.05  0.27] -> [-0.12]
[ 0.05  0.27 -0.12] -> [-0.18]

In [ ]: