In [1]:
# %load ../../preconfig.py
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
plt.rcParams['axes.grid'] = False
#import numpy as np
#import pandas as pd
#import sklearn
#import itertools
import logging
logger = logging.getLogger()
def show_image(filename, figsize=None, res_dir=True):
    if figsize:
        plt.figure(figsize=figsize)
    if res_dir:
        filename = './res/{}'.format(filename)
    plt.imshow(plt.imread(filename))
All algorithms for analysis of data are designed to produce a useful summary of the data, from which decisions are made.
"machine learning" not only summarize our data; they are perceived as learning a model or classifier from the data, and thus discover something about data that will be seen in the future.
Some major classes of machine-learning algorithms:
- Decision trees: suitable for binary and multiclass classification, best when the number of features is relatively small.
- Perceptrons: threshold functions $\sum_i w_i x_i \geq \theta$; binary classification, can handle very large numbers of features.
- Neural nets: binary or multiclass classification.
- Instance-based learning: uses the entire training set to represent the function $f$, e.g. k-nearest-neighbor.
- Support-vector machines: tend to remain accurate on unseen data.
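As a concrete illustration of instance-based learning, here is a minimal k-nearest-neighbor sketch with scikit-learn; the toy data and the n_neighbors value are illustrative assumptions, not from the text.
In [ ]:
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# toy 2-D data: two small clusters labeled +1 and -1 (made-up values)
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],
              [6.0, 6.0], [6.5, 7.0], [7.0, 6.5]])
y = np.array([+1, +1, +1, -1, -1, -1])

# instance-based learning: the "model" is essentially the stored training set
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
print(knn.predict([[2.0, 2.0], [6.2, 6.3]]))  # expected: [ 1 -1]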
In [2]:
#exercise
1. Initialize $w = 0$.
2. Pick a learning-rate parameter $\eta > 0$.
3. Consider each training example $t = (x, y)$ in turn. Let $y' = w \cdot x$. If $y'$ and $y$ have the same sign, do nothing; otherwise, replace $w$ by $w + \eta y x$ (a minimal sketch of this loop follows below).
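A minimal NumPy sketch of this update loop; the function name, `eta`, and the `max_rounds` default are illustrative choices, not from the text.
In [ ]:
import numpy as np

def perceptron_train(X, y, eta=0.1, max_rounds=100):
    """Basic perceptron training; X has shape (n, d), y holds labels +1/-1."""
    w = np.zeros(X.shape[1])             # initialize w = 0
    for _ in range(max_rounds):          # terminate after a fixed number of rounds
        errors = 0
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:  # wrong sign (or on the boundary)
                w = w + eta * yi * xi    # move w toward the misclassified point
                errors += 1
        if errors == 0:                  # every training point classified correctly
            break
    return w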
In [2]:
show_image('fig12_5.png')
If the data points are not linearly separable, the loop never terminates.
Some common tests for termination:
- Terminate after a fixed number of rounds.
- Terminate when the number of misclassified training points stops changing.
- Terminate when the number of errors on the test set stops changing.
Another technique that aids convergence is to lower the learning rate as the number of rounds increases, e.g. $\eta_t = \eta_0 / (1 + ct)$ for round $t$ and some small constant $c$.
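A sketch of the perceptron loop with this decaying rate and with the "misclassification count stops changing" test; the values of eta0 and c, and the single-round comparison, are illustrative simplifications.
In [ ]:
import numpy as np

def perceptron_train_decay(X, y, eta0=0.5, c=0.1, max_rounds=1000):
    """Perceptron with a decaying learning rate eta_t = eta0 / (1 + c*t)."""
    w = np.zeros(X.shape[1])
    prev_errors = None
    for t in range(max_rounds):
        eta = eta0 / (1.0 + c * t)       # lower the learning rate each round
        errors = 0
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:
                w = w + eta * yi * xi
                errors += 1
        if errors == prev_errors:        # misclassification count stopped changing
            break
        prev_errors = errors
    return w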
Winnow assumes that the feature vectors consist of 0's and 1's and that the labels are $+1$ or $-1$. Winnow produces only positive weights $w$.
The idea: there is a positive threshold $\theta$.
- If $w \cdot x > \theta$ and $y = +1$, or $w \cdot x < \theta$ and $y = -1$, the example is correctly classified, so do nothing.
- If $w \cdot x \leq \theta$ but $y = +1$, the weights for the components where $x$ has 1 are too low as a group; increase them all by a factor, say 2.
- If $w \cdot x \geq \theta$ but $y = -1$, the weights for the components where $x$ has 1 are too high as a group; decrease them all by the same factor, say 2.
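A minimal sketch of these Winnow updates; the initial weights of 1 and the default threshold $\theta = d$ are common choices assumed here, while the doubling/halving factor of 2 comes from the description above.
In [ ]:
import numpy as np

def winnow_train(X, y, theta=None, max_rounds=100):
    """Winnow for 0/1 feature vectors X of shape (n, d) and labels y in {+1, -1}."""
    d = X.shape[1]
    if theta is None:
        theta = float(d)                  # common choice of threshold (assumption)
    w = np.ones(d)                        # weights start positive and stay positive
    for _ in range(max_rounds):
        mistakes = 0
        for xi, yi in zip(X, y):
            s = np.dot(w, xi)
            if s <= theta and yi == +1:   # false negative: weights of the 1-components too low
                w[xi == 1] *= 2.0
                mistakes += 1
            elif s >= theta and yi == -1: # false positive: weights of the 1-components too high
                w[xi == 1] /= 2.0
                mistakes += 1
        if mistakes == 0:
            break
    return w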
At the cost of adding another dimension to the feature vectors, we can treat $\theta$ as one of the components of $w$, so that it too is adjusted during training:
$w' = [w, \theta]$,
$x' = [x, -1]$.
We can allow the $-1$ in $x'$ for $\theta$ if we treat it in the manner opposite to the way we treat components that are 1: its weight is halved when the other touched weights are doubled, and doubled when they are halved.
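A rough sketch of one Winnow update on the augmented vectors, where the $\theta$-component is treated oppositely to the 1-components; the helper name and in-place update style are illustrative.
In [ ]:
import numpy as np

def winnow_augmented_step(w_prime, x, y):
    """One Winnow update with w' = [w, theta] and x' = [x, -1].

    The decision rule w.x > theta becomes w'.x' > 0.  On a mistake, the weights of the
    components where x is 1 are doubled/halved as before, while the theta component is
    treated in the opposite manner (halved/doubled).
    """
    x_prime = np.append(x, -1.0)
    if y * np.dot(w_prime, x_prime) <= 0:          # misclassified under w'.x' > 0
        ones = np.flatnonzero(x == 1)              # components of x that are 1
        if y == +1:                                # w.x was too small relative to theta
            w_prime[ones] *= 2.0
            w_prime[-1] /= 2.0
        else:                                      # w.x was too large relative to theta
            w_prime[ones] /= 2.0
            w_prime[-1] *= 2.0
    return w_prime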
In [3]:
show_image('fig12_10.png')
In [4]:
show_image('fig12_11.png')
In [5]:
show_image('fig12_12.png')
In [7]:
#Exercise
In [ ]: