Assessing the quality of crowdsourced data in CollabMap from their provenance.
In this notebook, we explore the performance of classification using the provenance of a data entity instead of its dependencies (as shown here and in the paper). To distinguish between the two, we call the former historical provenance and the latter forward provenance. Apart from using the historical provenance, all other steps are the same as in the original experiments.
The CollabMap dataset based on historical provenance is provided in the collabmap/ancestor-graphs.csv
file. Each row corresponds to a building, route, or route set created in the application:
id
: the identifier of the data entity (i.e. building/route/route set).
trust_value
: the beta trust value calculated from the votes for the data entity.
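For reference, the beta trust value is typically the expected value of a Beta distribution over the supporting and opposing votes. The vote counts themselves are not included in this file, so the following is only an illustrative sketch with hypothetical counts n_pos and n_neg:

def beta_trust(n_pos, n_neg):
    # Expected value of Beta(n_pos + 1, n_neg + 1): tends towards 1
    # with many supporting votes and equals 0.5 when there are no votes.
    return (n_pos + 1) / (n_pos + n_neg + 2)

beta_trust(7, 1)  # 0.8, which the 0.75 threshold below would label as Trusted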
In [1]:
import pandas as pd
In [2]:
df = pd.read_csv("collabmap/ancestor-graphs.csv", index_col='id')
df.head()
Out[2]:
In [3]:
df.describe()
Out[3]:
In [4]:
trust_threshold = 0.75
df['label'] = df.apply(lambda row: 'Trusted' if row.trust_value >= trust_threshold else 'Uncertain', axis=1)
df.head() # The new label column is the last column below
Out[4]:
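As an aside, the same labelling can be written without df.apply as a vectorized numpy expression, which is equivalent to the cell above:

import numpy as np
# Vectorized equivalent of the df.apply call above
df['label'] = np.where(df.trust_value >= trust_threshold, 'Trusted', 'Uncertain')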
Having used the trust value to label all the data entities, we remove the trust_value
column from the data frame.
In [5]:
# We will not use trust value from now on
df.drop('trust_value', axis=1, inplace=True)
df.shape # the dataframe now has 23 columns (22 metrics + label)
Out[5]:
In [6]:
df_buildings = df.filter(like="Building", axis=0)
df_routes = df.filter(regex=r"^Route\d", axis=0)
df_routesets = df.filter(like="RouteSet", axis=0)
df_buildings.shape, df_routes.shape, df_routesets.shape # The number of data points in each dataset
Out[6]:
This section explores the balance of each of the three datasets and balances them using the SMOTE oversampling method.
In [7]:
from analytics import balance_smote
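For context, here is a minimal sketch of what such a balancing helper could look like using the imbalanced-learn package; the actual balance_smote in the analytics module may differ in its details:

from imblearn.over_sampling import SMOTE
import pandas as pd

def balance_smote_sketch(df):
    # Oversample the minority class with SMOTE so that both labels
    # are equally represented in the returned data frame.
    X, y = df.drop('label', axis=1), df.label
    X_res, y_res = SMOTE().fit_resample(X, y)
    balanced = pd.DataFrame(X_res, columns=X.columns)
    balanced['label'] = y_res
    return balanced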
In [8]:
df_buildings.label.value_counts()
Out[8]:
Balancing the building dataset:
In [9]:
df_buildings = balance_smote(df_buildings)
In [10]:
df_routes.label.value_counts()
Out[10]:
Balancing the route dataset:
In [11]:
df_routes = balance_smote(df_routes)
In [12]:
df_routesets.label.value_counts()
Out[12]:
Balancing the route set dataset:
In [13]:
df_routesets = balance_smote(df_routesets)
We now run the cross validation tests on the three balanced datasets (df_buildings, df_routes, and df_routesets) using all the features (combined), only the generic network metrics (generic), and only the provenance-specific network metrics (provenance). Please refer to Cross Validation Code.ipynb for a detailed description of the cross validation code.
In [14]:
from analytics import test_classification
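As a rough sketch of the core step inside test_classification (see Cross Validation Code.ipynb for the actual code), a decision tree classifier can be cross-validated on the labelled metrics as follows; collecting feature importances and testing the separate feature subsets are omitted here:

from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def cross_validate_sketch(df, cv=10):
    # k-fold cross validation of a decision tree over all features;
    # returns one accuracy score per fold.
    X, y = df.drop('label', axis=1), df.label
    return cross_val_score(DecisionTreeClassifier(), X, y, cv=cv)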
We test the classification of buildings, collecting the individual accuracy scores in results and the importance of every feature in each test in importances (both are pandas DataFrames). These two tables will also be used to collect data from testing the classification of routes and route sets later.
In [15]:
# Cross validation test on building classification
res, imps = test_classification(df_buildings)
# adding the Data Type column
res['Data Type'] = 'Building'
imps['Data Type'] = 'Building'
# storing the results and importance of features
results = res
importances = imps
# showing a few newest rows
results.tail()
Out[15]:
In [16]:
# Cross validation test on route classification
res, imps = test_classification(df_routes)
# adding the Data Type column
res['Data Type'] = 'Route'
imps['Data Type'] = 'Route'
# storing the results and importance of features
results = pd.concat([results, res], ignore_index=True)
importances = pd.concat([importances, imps], ignore_index=True)
# showing a few newest rows
results.tail()
Out[16]:
In [17]:
# Cross validation test on route set classification
res, imps = test_classification(df_routesets)
# adding the Data Type column
res['Data Type'] = 'Route Set'
imps['Data Type'] = 'Route Set'
# storing the results and importance of features
results = pd.concat([results, res], ignore_index=True)
importances = pd.concat([importances, imps], ignore_index=True)
# showing a few newest rows
results.tail()
Out[17]:
In [20]:
%matplotlib inline
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("paper", font_scale=1.4)
Converting the accuracy score from [0, 1] to percentage, i.e. [0, 100]:
In [21]:
results.Accuracy = results.Accuracy * 100
results.head()
Out[21]:
In [22]:
from matplotlib.font_manager import FontProperties
fontP = FontProperties()
fontP.set_size(12)
In [34]:
pal = sns.light_palette("seagreen", n_colors=3, reverse=True)
plot = sns.barplot(x="Data Type", y="Accuracy", hue='Metrics', palette=pal, errwidth=1, capsize=0.02, data=results)
plot.set_ylim(40, 90)
plot.legend(loc='upper center', bbox_to_anchor=(0.5, 1.0), ncol=3)
plot.set_ylabel('Accuracy (%)')
Out[34]:
Results: We cannot use the provenance of buildings in our approach for assessing their quality because their historical provenance graphs all share the same topology. There are small correlations between the provenance of routes/route sets and their quality. However, as shown above, the decision tree classifier's accuracy in predicting their quality is very low, 61% and 63%, compared with 97% and 96% when using the dependency graphs (the forward provenance). Note that the baseline accuracy of random selection in this application is 50%.