Exercises

1 - Apply the K-means [1] and AgglomerativeClustering [2] algorithms, experimenting with the available distance measures. Compare the results using cluster evaluation metrics (completeness and homogeneity, for example) [3].

2 - Apply the Silhouette Method (use silhouette_score [4] and/or silhouette_samples [5]) to the results of the K-means algorithm. Inspect the evaluation output (make sure you understand what these methods return, per the references) and assess the quality of your clustering. A minimal sketch is shown after this list.

3 - Which value of K (number of clusters) did you choose in the previous question? Implement the Elbow Method (do not use a library!) and find the most suitable K. Then run K-means again with that K.

4 - In question 3 you ran the algorithm with an appropriate K. Redo question 2 with the clusters obtained in the previous question and check whether the result improved.
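
For exercises 2 and 4, here is a minimal sketch of the Silhouette Method using silhouette_score [4] and silhouette_samples [5]; the k=3 K-means run is an illustrative assumption, and any clustering labels can be substituted:

from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, silhouette_samples

X = load_iris().data
labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)

# Mean silhouette coefficient over all samples: ranges from -1 to +1;
# values near +1 indicate dense, well-separated clusters.
print(silhouette_score(X, labels))

# Per-sample silhouette coefficients; negative values flag samples
# that may have been assigned to the wrong cluster.
print(silhouette_samples(X, labels)[:10])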


In [18]:
from sklearn.datasets import load_iris
from sklearn import decomposition
from sklearn import metrics
from sklearn.cluster import KMeans

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

dataset = load_iris()
X = dataset.data
y = dataset.target

def PlotOriginal(reduced_data_df, y):
    # Scatter plot of the PCA-reduced data colored by the true labels
    fig, ax = plt.subplots()
    ax.scatter(reduced_data_df[0], reduced_data_df[1], c=y)
    plt.show()

def PlotClusters(reduced_data_df, clustering):
    # Scatter plot of the PCA-reduced data colored by cluster assignment
    fig, ax = plt.subplots()
    ax.scatter(reduced_data_df[0], reduced_data_df[1], c=clustering)
    plt.show()

In [19]:
# K-means with k=3 (the number of true classes in the Iris dataset)
kmeans = KMeans(n_clusters=3, random_state=0).fit(X)
predictions = kmeans.predict(X)

# PCA to 2 components is used only for visualization;
# the clustering itself runs on the full 4-dimensional data
reduced_data = decomposition.PCA(n_components=2).fit_transform(X)

reduced_data_df = pd.DataFrame(reduced_data)
reduced_data_df[2] = predictions

# k-means clusters
PlotClusters(reduced_data_df, reduced_data_df[2])

# Original
PlotOriginal(reduced_data_df, y)



In [20]:
# AgglomerativeClustering - Euclidean - Complete Linkage
from sklearn.cluster import AgglomerativeClustering

# Note: in scikit-learn >= 1.2 the `affinity` parameter is deprecated in favor of `metric`
agglo = AgglomerativeClustering(n_clusters=3, affinity="euclidean", linkage="complete").fit_predict(X)


# Agglomerative clusters
PlotClusters(reduced_data_df, agglo)

# Original
PlotOriginal(reduced_data_df, y)



In [21]:
# AgglomerativeClustering - Manhattan - Complete Linkage
from sklearn.cluster import AgglomerativeClustering

agglo_Manhattan = AgglomerativeClustering(n_clusters=3, affinity="manhattan", linkage="complete").fit_predict(X)


# Agglomerative clusters
PlotClusters(reduced_data_df, agglo_Manhattan)

# Original
PlotOriginal(reduced_data_df, y)



In [22]:
# AgglomerativeClustering - Cosine - Complete Linkage
from sklearn.cluster import AgglomerativeClustering

agglo_Cosine = AgglomerativeClustering(n_clusters=3, affinity="cosine", linkage="complete").fit_predict(X)


# Agglomerative clusters
PlotClusters(reduced_data_df, agglo_Cosine)

# Original
PlotOriginal(reduced_data_df, y)



In [23]:
# Metrics

# K-means
metric_Kmeans_homo = metrics.homogeneity_score(y, predictions)
metric_Kmeans_comp = metrics.completeness_score(y, predictions)

# Agglomerative - Euclidean
metric_AggloEuclidean_homo = metrics.homogeneity_score(y, agglo)
metric_AggloEuclidean_comp = metrics.completeness_score(y, agglo)

# Agglomerative - Manhattan
metric_AggloManhattan_homo = metrics.homogeneity_score(y, agglo_Manhattan)
metric_AggloManhattan_comp = metrics.completeness_score(y, agglo_Manhattan)

# Agglomerative - Cosine
metric_AggloCosine_homo = metrics.homogeneity_score(y, agglo_Cosine)
metric_AggloCosine_comp = metrics.completeness_score(y, agglo_Cosine)

print("K-Means Metrics \n Homogeneity:", metric_Kmeans_homo, "\n Completeness:", metric_Kmeans_comp, "\n")
print("Agglomerative Clustering Metrics - Euclidean \n Homogeneity:", metric_AggloEuclidean_homo, "\n Completeness:", metric_AggloEuclidean_comp, "\n")
print("Agglomerative Clustering Metrics - Manhattan \n Homogeneity:", metric_AggloManhattan_homo, "\n Completeness:", metric_AggloManhattan_comp, "\n")
print("Agglomerative Clustering Metrics - Cosine \n Homogeneity:", metric_AggloCosine_homo, "\n Completeness:", metric_AggloCosine_comp)


K-Means Metrics 
 Homogeneity: 0.751485402199 
 Completeness: 0.764986151449 

Agglomerative Clustering Metrics - Euclidean 
 Homogeneity: 0.700115437096 
 Completeness: 0.745438275302 

Agglomerative Clustering Metrics - Manhattan 
 Homogeneity: 0.778176865951 
 Completeness: 0.803588540623 

Agglomerative Clustering Metrics - Cosine 
 Homogeneity: 0.717058965034 
 Completeness: 0.773421193274

In [38]:
# Elbow Method (implemented by hand, as required by exercise 3)
from scipy.spatial.distance import cdist

distortions = []
K = range(1, 10)
for k in K:
    kmeanModel = KMeans(n_clusters=k).fit(X)
    # Distortion: mean Euclidean distance from each sample to its nearest centroid
    distortions.append(sum(np.min(cdist(X, kmeanModel.cluster_centers_, 'euclidean'), axis=1)) / X.shape[0])

# Plot the elbow
plt.plot(K, distortions, 'ro-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal k')
plt.show()



In [32]:
# K-means with the optimal K = 2 suggested by the elbow plot
kmeansOptimal = KMeans(n_clusters=2, random_state=0).fit(X)
predictions = kmeansOptimal.predict(X)

# k-means clusters
PlotClusters(reduced_data_df, predictions)

# Original
PlotOriginal(reduced_data_df, y)
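
For exercise 4, a minimal sketch comparing the mean silhouette coefficient of the k=3 and k=2 solutions, reusing the kmeans and kmeansOptimal models fitted above (the side-by-side comparison is an assumption about how to verify the improvement asked for in the exercise):

from sklearn.metrics import silhouette_score

# A higher mean silhouette indicates denser, better-separated clusters.
print("k=3 silhouette:", silhouette_score(X, kmeans.labels_))
print("k=2 silhouette:", silhouette_score(X, kmeansOptimal.labels_))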


