In last week's workshop, you selected a data set from the UCI Machine Learning Repository and, based on the recommended analysis type, wrangled the data into a fitted model, showing some model evaluation. In particular:
When complete, I will review your code, so please submit your code via pull-request to the Introduction to Machine Learning with Scikit-Learn repository!
Downloaded from the UCI Machine Learning Repository on February 26, 2015. The first step is to fully describe your data in a README file. The dataset description is as follows:
The examined group comprised kernels belonging to three different varieties of wheat: Kama, Rosa and Canadian, 70 elements each, randomly selected for the experiment. High quality visualization of the internal kernel structure was detected using a soft X-ray technique. It is non-destructive and considerably cheaper than other more sophisticated imaging techniques like scanning microscopy or laser technology. The images were recorded on 13x18 cm X-ray KODAK plates. Studies were conducted using combine harvested wheat grain originating from experimental fields, explored at the Institute of Agrophysics of the Polish Academy of Sciences in Lublin.
The data set can be used for the tasks of classification and cluster analysis.
To construct the data, seven geometric parameters of wheat kernels were measured:

1. area A
2. perimeter P
3. compactness C = 4πA/P²
4. length of kernel
5. width of kernel
6. asymmetry coefficient
7. length of kernel groove

All of these parameters are real-valued and continuous.
M. Charytanowicz, J. Niewczas, P. Kulczycki, P.A. Kowalski, S. Lukasik, S. Zak, 'A Complete Gradient Clustering Algorithm for Features Analysis of X-ray Images', in: Information Technologies in Biomedicine, Ewa Pietka, Jacek Kawa (eds.), Springer-Verlag, Berlin-Heidelberg, 2010, pp. 15-24.
In this section we will begin to explore the dataset to determine relevant information.
In [1]:
%matplotlib notebook
import os
import json
import time
import pickle
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
In [2]:
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/00236/seeds_dataset.txt"
def fetch_data(fname='seeds_dataset.txt'):
    """
    Helper method to retrieve the ML Repository dataset.
    """
    response = requests.get(URL)
    response.raise_for_status()  # Fail loudly on a bad download
    outpath = os.path.abspath(fname)
    with open(outpath, 'wb') as f:
        f.write(response.content)
    return outpath
# Fetch the data if required
DATA = fetch_data()
In [3]:
FEATURES = [
"area",
"perimeter",
"compactness",
"length",
"width",
"asymmetry",
"groove",
"label"
]
LABEL_MAP = {
1: "Kama",
2: "Rosa",
3: "Canadian",
}
# Read the data into a DataFrame
df = pd.read_csv(DATA, sep=r'\s+', header=None, names=FEATURES)
# Convert class labels into text
df["label"] = df["label"].map(LABEL_MAP)
# Describe the dataset
print(df.describe())
In [4]:
# Determine the shape of the data
print("{} instances with {} features\n".format(*df.shape))
# Determine the frequency of each class
print(df.groupby('label')['label'].count())
In [5]:
from sklearn.preprocessing import LabelEncoder
# Extract our X and y data
X = df[FEATURES[:-1]]
y = df["label"]
# Encode our target variable
encoder = LabelEncoder().fit(y)
y = encoder.transform(y)
print(X.shape, y.shape)
In [6]:
# Create a scatter matrix of the dataframe features
from pandas.plotting import scatter_matrix
scatter_matrix(X, alpha=0.2, figsize=(8, 8), diagonal='kde')
plt.show()
In [9]:
from yellowbrick.features import ParallelCoordinates
oz = ParallelCoordinates(classes=encoder.classes_, normalize='standard').fit(X, y)
_ = oz.show()
In [10]:
from yellowbrick.features import RadViz
oz = RadViz(classes=encoder.classes_, alpha=0.35).fit(X, y)
_ = oz.show()
One way that we can structure our data for easy management is to save files on disk. The Scikit-Learn datasets are already structured this way, and when loaded into a Bunch (a Scikit-Learn class, imported from sklearn.utils in current versions) we can expose a data API that is very similar to how we've trained on our toy datasets in the past. A Bunch object exposes some important properties:

* data: the array of instances with shape n_samples * n_features
* target: the target array of length n_samples
* feature_names: the names of the feature columns
* target_names: the names of the target classes
* filenames: the paths to the files on disk
* DESCR: the description of the dataset, loaded from the README

Note: this does not preclude database storage of the data; in fact, a database can easily be extended to load the same Bunch API. Simply store the README and features in a dataset description table and load it from there. The filenames property will be redundant, but you could store a SQL statement that shows the data load.
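To make the note above concrete, here is a minimal sketch of loading the same Bunch-style API from a database. The table names, columns, and toy rows are all hypothetical, and a plain dict subclass stands in for Scikit-Learn's Bunch so the example stays self-contained:

```python
import sqlite3
import numpy as np

class Bunch(dict):
    """Minimal stand-in for sklearn's Bunch: a dict with attribute access."""
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError as e:
            raise AttributeError(key) from e

# Hypothetical schema: a description table plus the feature/target table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dataset_description (readme TEXT, feature_names TEXT)")
conn.execute("CREATE TABLE wheat (area REAL, perimeter REAL, label INTEGER)")
conn.execute("INSERT INTO dataset_description VALUES (?, ?)",
             ("Wheat kernels dataset", "area,perimeter"))
conn.executemany("INSERT INTO wheat VALUES (?, ?, ?)",
                 [(15.26, 14.84, 1), (20.71, 17.23, 2)])

def load_from_db(conn):
    # The README and feature names come from the description table
    readme, names = conn.execute(
        "SELECT readme, feature_names FROM dataset_description").fetchone()
    # The last column holds the target, mirroring the on-disk loader
    rows = np.array(conn.execute("SELECT * FROM wheat").fetchall())
    return Bunch(
        data=rows[:, :-1],
        target=rows[:, -1].astype(int),
        feature_names=names.split(","),
        DESCR=readme,
    )

dataset = load_from_db(conn)
print(dataset.data.shape)  # (2, 2)
```

The caller sees the same `dataset.data` / `dataset.target` interface either way, which is the point of standardizing on the Bunch API.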
IMPORTANT: for the code below to work, you need to unzip wheat.zip in the data folder at the top level of the repository.
In order to manage our data set on disk, we'll store three files in the data/wheat directory: meta.json (the target and feature names), README.md (the dataset description), and seeds_dataset.txt (the raw data).
In [13]:
from sklearn.utils import Bunch  # formerly sklearn.datasets.base, removed in newer versions
DATA_DIR = os.path.abspath(os.path.join( "..", "data", "wheat"))
# Show the contents of the data directory
for name in os.listdir(DATA_DIR):
    if name.startswith("."):
        continue
    print("- {}".format(name))
In [14]:
def load_data(root=DATA_DIR):
    # Construct the `Bunch` for the wheat dataset
    filenames = {
        'meta': os.path.join(root, 'meta.json'),
        'rdme': os.path.join(root, 'README.md'),
        'data': os.path.join(root, 'seeds_dataset.txt'),
    }

    # Load the meta data from the meta json
    with open(filenames['meta'], 'r') as f:
        meta = json.load(f)
        target_names = meta['target_names']
        feature_names = meta['feature_names']

    # Load the description from the README.
    with open(filenames['rdme'], 'r') as f:
        DESCR = f.read()

    # Load the dataset from the text file.
    dataset = np.loadtxt(filenames['data'])

    # Extract the target from the last column of the data
    data = dataset[:, 0:-1]
    target = dataset[:, -1]

    # Create the bunch object
    return Bunch(
        data=data,
        target=target,
        filenames=filenames,
        target_names=target_names,
        feature_names=feature_names,
        DESCR=DESCR,
    )

# Save the dataset as a variable we can use.
dataset = load_data()

print(dataset.data.shape)
print(dataset.target.shape)
In [15]:
def get_internal_params(model):
    """
    Print the learned attributes of an estimator. By convention,
    Scikit-Learn stores parameters learned during fit() in attributes
    that end with a single trailing underscore.
    """
    for attr in dir(model):
        if attr.endswith("_") and not attr.startswith("_"):
            print(attr, getattr(model, attr))
In [16]:
from sklearn.tree import DecisionTreeClassifier
In [17]:
model = DecisionTreeClassifier()
get_internal_params(model)
In [18]:
model.fit(X, y)
get_internal_params(model)
In [19]:
from sklearn.linear_model import LogisticRegression
model = LogisticRegression().fit(X, y)
get_internal_params(model)
In [20]:
from sklearn import metrics
from sklearn.model_selection import KFold
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
In [21]:
def fit_and_evaluate(X, y, model, label, **kwargs):
    """
    Because of the Scikit-Learn API, we can create a function to
    do all of the fit and evaluate work on our behalf!
    """
    start = time.time()  # Start the clock!
    scores = {'precision': [], 'recall': [], 'accuracy': [], 'f1': []}

    kf = KFold(n_splits=12, shuffle=True)
    for train, test in kf.split(X, y):
        X_train, X_test = X.iloc[train], X.iloc[test]
        y_train, y_test = y[train], y[test]

        estimator = model(**kwargs)
        estimator.fit(X_train, y_train)

        expected = y_test
        predicted = estimator.predict(X_test)

        # Append our scores to the tracker
        scores['precision'].append(metrics.precision_score(expected, predicted, average="weighted"))
        scores['recall'].append(metrics.recall_score(expected, predicted, average="weighted"))
        scores['accuracy'].append(metrics.accuracy_score(expected, predicted))
        scores['f1'].append(metrics.f1_score(expected, predicted, average="weighted"))

    # Report
    print("Build and Validation of {} took {:0.3f} seconds".format(label, time.time() - start))
    print("Validation scores are as follows:\n")
    print(pd.DataFrame(scores).mean())

    # Write official estimator to disk
    estimator = model(**kwargs)
    estimator.fit(X, y)

    outpath = label.lower().replace(" ", "-") + ".pickle"
    with open(outpath, 'wb') as f:
        pickle.dump(estimator, f)

    print("\nFitted model written to:\n{}".format(os.path.abspath(outpath)))
In [22]:
# Perform SVC Classification
fit_and_evaluate(X, y, SVC, "Wheat SVM Classifier", gamma='auto')
In [23]:
# Perform kNN Classification
fit_and_evaluate(X, y, KNeighborsClassifier, "Wheat kNN Classifier", n_neighbors=12)
In [24]:
# Perform Random Forest Classification
fit_and_evaluate(X, y, RandomForestClassifier, "Wheat Random Forest Classifier")
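Since fit_and_evaluate pickles each fitted estimator, it is worth confirming that the round trip preserves a trained model. The sketch below fits a tiny kNN classifier on made-up points (the data and labels are invented for illustration) and restores it from an in-memory buffer rather than the files written above:

```python
import io
import pickle

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two well-separated clusters of one-dimensional toy points
X_toy = np.array([[0.0], [1.0], [10.0], [11.0]])
y_toy = np.array([0, 0, 1, 1])
est = KNeighborsClassifier(n_neighbors=1).fit(X_toy, y_toy)

# Round-trip through pickle, as fit_and_evaluate does with a file on disk
buf = io.BytesIO()
pickle.dump(est, buf)
buf.seek(0)
restored = pickle.load(buf)

print(restored.predict([[0.5], [10.5]]))  # [0 1]
```

In production you would open the pickle written by fit_and_evaluate in 'rb' mode and call predict on new kernel measurements in the same feature order as training.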