We will start with a very simple example of learning from data: tossing a coin. We will first generate some data (30% heads and 70% tails) and then learn the CPD of the coin using the Maximum Likelihood Estimator and the Bayesian Estimator with a Dirichlet prior.
In [3]:
# Generate data
import numpy as np
import pandas as pd
raw_data = np.array([0] * 30 + [1] * 70) # Representing heads by 0 and tails by 1
data = pd.DataFrame(raw_data, columns=['coin'])
print(data)
In [5]:
# Defining the Bayesian Model
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator
model = BayesianModel()
model.add_node('coin')
# Fitting the data to the model using Maximum Likelihood Estimator
model.fit(data, estimator_type=MaximumLikelihoodEstimator)
print(model.get_cpds('coin'))
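As a quick sanity check (plain pandas, not pgmpy output), the Maximum Likelihood estimate is just the relative frequency of each state in the data:
print(data['coin'].value_counts(normalize=True))  # relative frequencies: 0.3 for heads (0), 0.7 for tails (1)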
In [6]:
# Fitting the data to the model using Bayesian Estimator with Dirichlet prior with equal pseudo counts.
model.fit(data, estimator_type=BayesianEstimator, prior_type='dirichlet', pseudo_counts={'coin': [50, 50]})
print(model.get_cpds('coin'))
We can see that we get the expected results. In the Maximum Likelihood case the probability is estimated purely from the data, whereas in the Bayesian case we had a prior of $ P(H) = 0.5 $ and $ P(T) = 0.5 $, expressed as 50 pseudo counts for each state. Therefore, with 30 heads and 70 tails in the data, we get a posterior of $ P(H) = \frac{30 + 50}{100 + 100} = 0.4 $ and $ P(T) = \frac{70 + 50}{100 + 100} = 0.6 $.
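We can reproduce this computation by hand (a minimal sketch in plain Python; the pseudo counts of a Dirichlet prior are simply added to the observed counts before normalizing):
observed = {'H': 30, 'T': 70}  # counts in the generated data
pseudo = {'H': 50, 'T': 50}    # equal pseudo counts used as the prior
total = sum(observed.values()) + sum(pseudo.values())
posterior = {s: (observed[s] + pseudo[s]) / total for s in observed}
print(posterior)  # {'H': 0.4, 'T': 0.6}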
Similarly, we can learn the parameters of more complex models. Let's take the student model as an example and compare the results of the Maximum Likelihood Estimator and the Bayesian Estimator.
In [14]:
# Generating random data with each variable having 2 states and equal probabilities for each state
import numpy as np
import pandas as pd
raw_data = np.random.randint(low=0, high=2, size=(1000, 5))
data = pd.DataFrame(raw_data, columns=['D', 'I', 'G', 'L', 'S'])
print(data)
In [15]:
# Defining the model
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator
model = BayesianModel([('D', 'G'), ('I', 'G'), ('I', 'S'), ('G', 'L')])
# Learning CPDs using the Maximum Likelihood Estimator
model.fit(data, estimator_type=MaximumLikelihoodEstimator)
for cpd in model.get_cpds():
    print("CPD of {variable}:".format(variable=cpd.variable))
    print(cpd)
As the data was generated randomly with equal probabilities for each state, all the estimated probability values are close to 0.5, as we expected.
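For intuition, here is a minimal pandas sketch (not part of the pgmpy API) of what the Maximum Likelihood Estimator computes for the CPD of G: the relative frequency of each state of G within each configuration of its parents D and I.
counts = data.groupby(['D', 'I'])['G'].value_counts().unstack(fill_value=0)
print(counts.div(counts.sum(axis=1), axis=0))  # every row should be close to [0.5, 0.5]
Now coming to the Bayesian Estimator: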
In [17]:
# Learning with Bayesian Estimator using dirichlet prior for each variable.
pseudo_counts = {'D': [300, 700], 'I': [500, 500], 'G': [800, 200], 'L': [500, 500], 'S': [400, 600]}
model.fit(data, estimator_type=BayesianEstimator, prior_type='dirichlet', pseudo_counts=pseudo_counts)
for cpd in model.get_cpds():
    print("CPD of {variable}:".format(variable=cpd.variable))
    print(cpd)
Since the data was randomly generated with equal probabilities for each state, the likelihood pulls the posterior probabilities towards 0.5, but because of the prior the final values lie between the prior values and 0.5.
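For example, for the variable D (a sketch in plain numpy/pandas; it assumes only that the pseudo counts [300, 700] are added to the observed counts, as above), the posterior lands between the prior of 0.3/0.7 and the empirical frequencies near 0.5/0.5:
observed_D = data['D'].value_counts().sort_index()  # counts of D=0 and D=1, roughly 500 each
pseudo_D = np.array([300, 700])                     # the Dirichlet pseudo counts used above
print((observed_D.values + pseudo_D) / (observed_D.sum() + pseudo_D.sum()))  # approx. [0.4, 0.6]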