Generative Model
This article contains my notes on generative models, covering Lectures 5 and 6 of Machine Learning by Andrew Ng. In logistic regression via generalized linear models, we approximate \(P(y|x)\) directly from the given data. Such learning algorithms are discriminative: we predict \(y\) from the input features \(x\). A generative model, on the contrary, models \(P(x|y)\), the probability of the features \(x\) given the class \(y\). In other words, we want to study what the feature structure looks like within each class \(y\). If we also learn \(P(y)\), we can easily recover \(P(y|x)\); for example, in a binary classification problem,
\begin{equation}
P(y=1|x) = \frac{P(x|y=1)P(y=1)}{P(x)},
\label{eqn:bayes}
\end{equation}
where \(P(x) = P(x|y=0)P(y=0) + P(x|y=1)P(y=1)\).
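To make this concrete, here is a tiny numerical sketch of recovering \(P(y=1|x)\) from \(P(x|y)\) and \(P(y)\); the probabilities below are made up purely for illustration.

# A tiny numerical sketch of recovering P(y=1|x) from P(x|y) and P(y).
# All numbers below are made up purely for illustration.
p_y1 = 0.3                    # prior P(y=1)
p_y0 = 1 - p_y1               # prior P(y=0)
p_x_given_y1 = 0.05           # P(x|y=1) at some fixed feature value x
p_x_given_y0 = 0.01           # P(x|y=0) at the same x
# total probability: P(x) = P(x|y=0)P(y=0) + P(x|y=1)P(y=1)
p_x = p_x_given_y0 * p_y0 + p_x_given_y1 * p_y1
# Bayes rule: P(y=1|x) = P(x|y=1)P(y=1) / P(x)
print(p_x_given_y1 * p_y1 / p_x)    # about 0.68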
In this article, we are going to look at two simple examples of generative models: Gaussian discriminant analysis and Naive Bayes.
Gaussian Discriminant Analysis
Assume \(x \in \mathbb{R}^n\). In Gaussian discriminant analysis, we assume \(P(x|y)\) is Gaussian, i.e.,
\begin{equation}
P(x|y) = \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right),
\end{equation}
where \(\mu\) is the mean and \(\Sigma\) is the covariance matrix.
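As a quick sanity check of this density, it can be evaluated with scipy.stats.multivariate_normal; the mean, covariance and test point below are made-up values, not from the lecture.

import numpy as np
from scipy.stats import multivariate_normal

# made-up parameters for a 2-dimensional example
mu = np.array([0.0, 1.0])              # mean vector
Sigma = np.array([[2.0, 0.3],          # covariance matrix
                  [0.3, 1.0]])
x = np.array([0.5, 0.8])               # a sample feature vector
# density P(x|y) under the Gaussian assumption
print(multivariate_normal.pdf(x, mean=mu, cov=Sigma))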
Let's apply Gaussian discriminant analysis to a binary classification problem to get a generative model for prediction. In this model, we have
\begin{align}
y &\sim \mathrm{Bernoulli}(\phi), \\
x|y=0 &\sim \mathcal{N}(\mu_0, \Sigma), \\
x|y=1 &\sim \mathcal{N}(\mu_1, \Sigma),
\end{align}
where \(\phi, \mu_0, \mu_1\) and \(\Sigma\) are parameters for which the log-likelihood function
\begin{equation}
l(\phi, \mu_0, \mu_1, \Sigma) = \log\prod_{i=1}^m P(x^{(i)}, y^{(i)}; \phi, \mu_0, \mu_1, \Sigma)
\end{equation}
is maximized. Here \(P(x^{(i)}, y^{(i)})\) is the joint probability and \(\{(x^{(i)}, y^{(i)})\}_{i=1}^m\) is the set of sample points.
Solving the maximization problem for \(l(\phi, \mu_0, \mu_1, \Sigma)\), we get the parameters of the generative model:
\begin{align}
\phi &= \frac{1}{m}\sum_{i=1}^m 1\{y^{(i)}=1\}, \\
\mu_0 &= \frac{\sum_{i=1}^m 1\{y^{(i)}=0\}x^{(i)}}{\sum_{i=1}^m 1\{y^{(i)}=0\}}, \\
\mu_1 &= \frac{\sum_{i=1}^m 1\{y^{(i)}=1\}x^{(i)}}{\sum_{i=1}^m 1\{y^{(i)}=1\}}, \\
\Sigma &= \frac{1}{m}\sum_{i=1}^m (x^{(i)}-\mu_{y^{(i)}})(x^{(i)}-\mu_{y^{(i)}})^T.
\end{align}
Once all parameters are known, we can then predict \(y\) based on a given \(x\).
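Before specializing to the one-dimensional example below, here is a minimal sketch (with function names and toy data of my own, not from the lecture) of how the maximum likelihood parameters and the prediction \(P(y=1|x)\) could be computed for general \(n\)-dimensional features.

import numpy as np
from scipy.stats import multivariate_normal

def fit_gda(X, y):
    """Maximum-likelihood estimates of phi, mu0, mu1 and the shared Sigma."""
    m = len(y)
    phi = np.mean(y == 1)
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    # pooled covariance: average outer product of the centered samples
    centered = X - np.where((y == 1)[:, None], mu1, mu0)
    Sigma = centered.T @ centered / m
    return phi, mu0, mu1, Sigma

def predict_proba(x, phi, mu0, mu1, Sigma):
    """P(y=1|x) from Bayes rule with Gaussian class conditionals."""
    p1 = multivariate_normal.pdf(x, mean=mu1, cov=Sigma) * phi
    p0 = multivariate_normal.pdf(x, mean=mu0, cov=Sigma) * (1 - phi)
    return p1 / (p0 + p1)

# toy data just to exercise the functions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
phi, mu0, mu1, Sigma = fit_gda(X, y)
print(predict_proba(np.array([1.0, 1.0]), phi, mu0, mu1, Sigma))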
In my previous article Generalized Linear Model (Examples), I built a logistic regression for good or bad field goals with yardage as the only feature. Now I am going to use the above generative model to predict \(P(y|x)\). In this simple example, \(n=1\), so
\begin{equation}
P(x|y=0) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu_0)^2}{2\sigma^2}\right), \qquad
P(x|y=1) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu_1)^2}{2\sigma^2}\right),
\label{eqn:simple}
\end{equation}
where \(\sigma^2\) is the shared variance.
import requests
import re
import matplotlib.pyplot as plt
import numpy as np

# retrieve the data online
fieldgoal_url = 'http://www.stat.ufl.edu/~winner/data/fieldgoal.dat'
response = requests.get(fieldgoal_url)
# extract sample points (three integers per record):
# x is the yardage, y indicates whether the field goal was good (1) or not (0)
data_pat = r'(\d+)'
data = list(map(int, re.findall(data_pat, response.text)))
x, y = data[::3], data[1::3]
# size of the sample set
m = len(x)
# separate x based on whether the field goal is good or not
good_x = [x[i] for i in range(m) if y[i] == 1]
bad_x = [x[i] for i in range(m) if y[i] == 0]
plt.hist(good_x, 30, density=True, color='green', alpha=0.75)
plt.hist(bad_x, 30, density=True, color='red', alpha=0.75)
plt.show()
# maximum-likelihood parameters
phi = len(good_x) / float(m)
mu = [0, 0]
mu[0] = sum(bad_x) / float(len(bad_x))
mu[1] = sum(good_x) / float(len(good_x))
# pooled (shared) variance over both classes
sigma = sum((x[i] - mu[y[i]]) ** 2 for i in range(m)) / float(m)
print((phi, mu[0], mu[1], sigma))
(0.7974683544303798, 43.630208333333336, 34.69444444444444, 82.28603529360076)
Gaussian discriminant analysis assumes that within each class the feature \(x\) is normally distributed. Below is a piece of code to show the distributions of the feature together with the fitted Gaussians.
from scipy.stats import norm

ax = plt.subplot(1, 2, 1)
plt.title('Good field goals')
plt.hist(good_x, 30, density=True, color='green', alpha=0.65)
u = np.arange(15, 65, 0.1)
v = norm.pdf(u, mu[1], np.sqrt(sigma))
plt.plot(u, v, 'g', linewidth=2)
plt.subplot(1, 2, 2, sharey=ax)
plt.title('Bad field goals')
plt.hist(bad_x, 30, density=True, color='red', alpha=0.65)
v = norm.pdf(u, mu[0], np.sqrt(sigma))
plt.plot(u, v, 'r', linewidth=2)
fig = plt.gcf()
fig.set_size_inches(16, 6)
plt.show()
We see from the above figure that the feature \(x\) roughly fits the Gaussian distributions, though not too well. In order to compare this model to the previous logistic regression, let's calculate \(P(y=1|x)\) from the parameters. By Equation (\(\ref{eqn:bayes}\)),
\begin{equation}
P(y=1|x) = \frac{P(x|y=1)\phi}{P(x|y=1)\phi + P(x|y=0)(1-\phi)}.
\label{eqn:prediction}
\end{equation}
Recall that in the previous logistic regression, we used gradient ascent to obtain the parameters \(a \approx -0.1223\) and \(b \approx 6.2374\), and logistic regression gives us the prediction
\begin{equation}
P(y=1|x) = \frac{1}{1+e^{-(ax+b)}}.
\end{equation}
# generative(x) = P(y=1|x) from the Gaussian discriminant analysis parameters
def generative(x):
    good = norm.pdf(x, mu[1], np.sqrt(sigma))
    bad = norm.pdf(x, mu[0], np.sqrt(sigma))
    return phi*good / (phi*good + (1-phi)*bad)

# logistic(x) is the prediction from logistic regression
a, b = (-0.12228007960331008, 6.237420310313901)
def logistic(x, a, b):
    eta = a*x + b
    return 1.0/(1 + np.exp(-eta))

# calculate the observed probability of a good field goal at each yardage
import collections
gxc = collections.Counter(good_x)
bxc = collections.Counter(bad_x)
min_x = min(x)
max_x = max(x)
observation = []
for i in range(min_x, max_x+1):
    total = gxc[i] + bxc[i]
    total = 1.0 if total == 0 else float(total)
    observation.append(gxc[i]/total)

# plot observation
plt.plot(range(min_x, max_x+1), observation, 'ro')
# plot prediction from generative model
u = np.arange(min_x, max_x, 0.1)
v = [generative(t) for t in u]
plt.plot(u, v, 'g', linewidth=2, label='generative')
# plot prediction from logistic regression
v = [logistic(t, a, b) for t in u]
plt.plot(u, v, 'b', linewidth=2, label='logistic')
fig = plt.gcf()
fig.set_size_inches(12, 8)
plt.legend(loc='upper right', prop={'size': 12})
plt.show()
These two methods give very similar predictions! Why is that? Let's come back to Equation (\(\ref{eqn:prediction}\)) and put in the formulas (\(\ref{eqn:simple}\)) for \(P(x|y=0)\) and \(P(x|y=1)\):
\begin{equation}
P(y=1|x) = \frac{\phi\exp\left(-\frac{(x-\mu_1)^2}{2\sigma^2}\right)}{\phi\exp\left(-\frac{(x-\mu_1)^2}{2\sigma^2}\right) + (1-\phi)\exp\left(-\frac{(x-\mu_0)^2}{2\sigma^2}\right)}
= \frac{1}{1 + \frac{1-\phi}{\phi}\exp\left(\frac{(x-\mu_1)^2 - (x-\mu_0)^2}{2\sigma^2}\right)}.
\end{equation}
Therefore,
\begin{equation}
P(y=1|x) = \frac{1}{1 + e^{-(Ax+B)}}, \qquad A = \frac{\mu_1 - \mu_0}{\sigma^2}, \qquad B = \frac{\mu_0^2 - \mu_1^2}{2\sigma^2} - \log\frac{1-\phi}{\phi}.
\end{equation}
Indeed, Gaussian discriminant analysis implies logistic regression. We can then put the parameters from Gaussian discriminant analysis into the logistic regression form.
A = (mu[1] - mu[0]) / sigma
B = (mu[0]**2 - mu[1]**2) / (2*sigma) - np.log((1-phi)/phi)
print('Parameters from Gaussian discriminant analysis:\n{}'.format((A, B)))
print('Parameters from gradient ascent:\n{}'.format((a, b)))
Parameters from Gaussian discriminant analysis:
(-0.10859392917650769, 5.6233369024140298)
Parameters from gradient ascent:
(-0.12228007960331008, 6.237420310313901)
gau_abs_err = gra_abs_err = 0
for i, t in enumerate(range(min_x, max_x)):
    gau_abs_err += abs(logistic(t, A, B) - observation[i])
    gra_abs_err += abs(logistic(t, a, b) - observation[i])
print('Absolute error of Gaussian discriminant analysis: {}'.format(gau_abs_err))
print('Absolute error of gradient ascent: {}'.format(gra_abs_err))
Absolute error of Gaussian discriminant analysis: 4.41501707618
Absolute error of gradient ascent: 4.42931961272
In this example, Gaussian discriminant analysis does a little bit better than the logistic regression fitted by gradient ascent. Another advantage of Gaussian discriminant analysis, and of generative models in general, is that we don't need many sample points to get a good prediction, because we impose strong assumptions on the structure of the features.
We see from the above calculation that Gaussian discriminant analysis implies logistic regression. A similar calculation shows that if the class conditionals come from any exponential family of distributions, the posterior will also have the logistic form.
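Here is a sketch of that calculation, assuming both class conditionals belong to the same exponential family \(P(x|y) = b(x)\exp\left(\eta_y^T T(x) - a(\eta_y)\right)\) with natural parameters \(\eta_0\) and \(\eta_1\) (this parametrization is my shorthand, not from the lecture). Then
\begin{equation}
P(y=1|x) = \frac{\phi\, b(x) e^{\eta_1^T T(x) - a(\eta_1)}}{\phi\, b(x) e^{\eta_1^T T(x) - a(\eta_1)} + (1-\phi)\, b(x) e^{\eta_0^T T(x) - a(\eta_0)}}
= \frac{1}{1 + \exp\left(-\left[(\eta_1 - \eta_0)^T T(x) + a(\eta_0) - a(\eta_1) - \log\frac{1-\phi}{\phi}\right]\right)},
\end{equation}
which is again of the logistic form, now linear in \(T(x)\).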
Naive Bayes
In a generative model, we would like to model \(P(x|y)\). Suppose there are \(n\) features \(x = (x_1, x_2, \dots, x_n)\). The naive Bayes assumption is simply that all the \(x_i\) are conditionally independent given \(y\). Therefore,
\begin{equation}
P(x|y) = P(x_1, x_2, \dots, x_n|y) = P(x_1|y)P(x_2|x_1, y)\cdots P(x_n|x_1, \dots, x_{n-1}, y) = \prod_{i=1}^n P(x_i|y),
\end{equation}
where the second equality comes from the chain rule of probability, and the last equality comes from the assumption that all the \(x_i\) are conditionally independent given \(y\).
Spam filter
Suppose we want to make a spam filter so that, given an email, it can decide whether or not the email is spam. To start with, we need a dictionary of words as features; for example, given a training set of emails, we can take all the words that appear as the list of features. Suppose the list of words is \(\{w_1, w_2, \dots, w_n\}\). For each email, we convert it into an \(n \times 1\) vector
\begin{equation}
x = (x_1, x_2, \dots, x_n)^T,
\end{equation}
where \(x_i \in \{0, 1\}\) indicates whether the word \(w_i\) appears in the email. After conversion, the training set becomes \(\{(x^{(i)}, y^{(i)})\}_{i=1}^m\), where \(y^{(i)} = 1\) if the \(i\)-th email is spam.
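As a small sketch of this conversion (the tiny vocabulary, the helper email_to_vector and the example email are made up for illustration; a real vocabulary would be built from the training emails):

import re

# a made-up six-word vocabulary
vocabulary = ['buy', 'cheap', 'meeting', 'now', 'reminder', 'tomorrow']

def email_to_vector(email, vocabulary):
    """Convert an email into a binary vector x with x_i = 1 iff word w_i appears."""
    words = set(re.findall(r'[a-z]+', email.lower()))
    return [1 if w in words else 0 for w in vocabulary]

print(email_to_vector('Buy cheap watches now!', vocabulary))
# [1, 1, 0, 1, 0, 0]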
We impose the naive Bayes assumption on the features \(x\). In reality, this is not a correct assumption: for example, if the word "reminder" appears in an email, most likely the word "tomorrow" will also appear in it. Even though the naive Bayes assumption neglects such obvious dependence, it can still provide a pretty good spam filter.
With the naive Bayes assumption,
\begin{equation}
P(x|y) = \prod_{j=1}^n P(x_j|y),
\end{equation}
so the parameters needed for the model are
\begin{equation}
\phi_{j|y=1} = P(x_j=1|y=1), \qquad \phi_{j|y=0} = P(x_j=1|y=0), \qquad \phi_y = P(y=1),
\end{equation}
for \(j = 1, 2, \dots, n\). These parameters will be determined by maximizing the joint likelihood function
\begin{equation}
L(\phi_y, \phi_{j|y=0}, \phi_{j|y=1}) = \prod_{i=1}^m P(x^{(i)}, y^{(i)}).
\end{equation}
The maximum likelihood estimate of the joint likelihood function gives
\begin{equation}
\phi_{j|y=1} = \frac{\sum_{i=1}^m 1\{x_j^{(i)}=1 \wedge y^{(i)}=1\}}{\sum_{i=1}^m 1\{y^{(i)}=1\}}, \quad
\phi_{j|y=0} = \frac{\sum_{i=1}^m 1\{x_j^{(i)}=1 \wedge y^{(i)}=0\}}{\sum_{i=1}^m 1\{y^{(i)}=0\}}, \quad
\phi_y = \frac{1}{m}\sum_{i=1}^m 1\{y^{(i)}=1\}.
\label{eqn:naive_parameters}
\end{equation}
Here \(1\{\mathrm{proposition}\}\) is the indicator function, which is \(1\) if the proposition is true and \(0\) otherwise.
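Here is a minimal sketch of these maximum likelihood estimates on a made-up toy training set; the rows of X follow the six-word vocabulary from the earlier sketch, and everything here is for illustration only.

import numpy as np

# toy training set: rows are emails encoded as binary vectors, y[i] = 1 means spam
X = np.array([[1, 1, 0, 1, 0, 0],
              [1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 0, 1]])
y = np.array([1, 1, 0, 0])

phi_y = np.mean(y == 1)                          # P(y=1)
phi_j_given_spam = X[y == 1].mean(axis=0)        # P(x_j=1 | y=1) for each word j
phi_j_given_ham = X[y == 0].mean(axis=0)         # P(x_j=1 | y=0) for each word j
print(phi_y, phi_j_given_spam, phi_j_given_ham)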
Once all the parameters are known, we can calculate \(P(y=1|x)\) using Bayes rule as before:
\begin{equation}
P(y=1|x) = \frac{\left(\prod_{j=1}^n P(x_j|y=1)\right)\phi_y}{\left(\prod_{j=1}^n P(x_j|y=1)\right)\phi_y + \left(\prod_{j=1}^n P(x_j|y=0)\right)(1-\phi_y)}.
\end{equation}
However, there is a problem with this model. Suppose the \(k\)-th word \(w_k\) never appears in the training set of emails; then based on the formula (\(\ref{eqn:naive_parameters}\)),
\begin{equation}
\phi_{k|y=1} = 0 \quad \text{and} \quad \phi_{k|y=0} = 0.
\end{equation}
Then, when we try to use this model to predict whether an email containing the word \(w_k\) is spam, a '0/0' error will be encountered. To solve this issue, we can apply Laplace smoothing to get
\begin{equation}
\phi_{j|y=1} = \frac{\sum_{i=1}^m 1\{x_j^{(i)}=1 \wedge y^{(i)}=1\} + 1}{\sum_{i=1}^m 1\{y^{(i)}=1\} + 2}, \qquad
\phi_{j|y=0} = \frac{\sum_{i=1}^m 1\{x_j^{(i)}=1 \wedge y^{(i)}=0\} + 1}{\sum_{i=1}^m 1\{y^{(i)}=0\} + 2}.
\end{equation}
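Continuing the same made-up toy data as the previous sketches, here is a sketch of the smoothed estimates together with the prediction \(P(y=1|x)\); the helper names are my own.

import numpy as np

# same toy training set as in the earlier sketch (made up for illustration)
X = np.array([[1, 1, 0, 1, 0, 0],
              [1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 0, 1]])
y = np.array([1, 1, 0, 0])

phi_y = np.mean(y == 1)
# Laplace smoothing: add 1 to each numerator count and 2 to each denominator count
phi_spam = (X[y == 1].sum(axis=0) + 1) / (np.sum(y == 1) + 2.0)   # P(x_j=1|y=1)
phi_ham = (X[y == 0].sum(axis=0) + 1) / (np.sum(y == 0) + 2.0)    # P(x_j=1|y=0)

def posterior_spam(x):
    """P(y=1|x) for a binary feature vector x, using the naive Bayes factorization."""
    x = np.asarray(x)
    like_spam = np.prod(np.where(x == 1, phi_spam, 1 - phi_spam)) * phi_y
    like_ham = np.prod(np.where(x == 1, phi_ham, 1 - phi_ham)) * (1 - phi_y)
    return like_spam / (like_spam + like_ham)

print(posterior_spam([1, 0, 0, 1, 0, 0]))   # an email containing 'buy' and 'now'

In practice one would sum log-probabilities instead of multiplying, to avoid numerical underflow when \(n\) is large.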
One variation
In the above model for the spam filter, the feature \(x_i \in \{0, 1\}\) captures whether or not the word \(w_i\) appears in the email. We can easily generalize this model to other data sets, with the feature \(x_i\) taking values in \(\{1, 2, \dots, k\}\). The only difference is that instead of a Bernoulli distribution, we now use a multinomial distribution for \(P(x_i|y)\).
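A minimal sketch of the multinomial estimate for a single \(k\)-valued feature, on made-up data, with Laplace smoothing adding \(1\) to each count and \(k\) to the denominator:

import numpy as np

# toy example: one feature taking values in {1, 2, 3} (k = 3), labels y in {0, 1}
x_i = np.array([1, 3, 2, 2, 3, 1, 1, 2])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])
k = 3

# multinomial estimate of P(x_i = v | y = 1) with Laplace smoothing
counts = np.array([np.sum((x_i == v) & (y == 1)) for v in range(1, k + 1)])
phi_given_spam = (counts + 1) / (np.sum(y == 1) + k)
print(phi_given_spam)        # the entries sum to 1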
Another variation
In the spam filter above, the feature \(x\) only captures whether or not the words in the list appear in the email; it ignores how many times each word appears. We are now going to introduce another model which respects the number of times each word appears in the email. For a given email with \(l\) words, we let
\begin{equation}
x = (x_1, x_2, \dots, x_l)
\end{equation}
be the features of this email, where \(x_i\) is the index in the dictionary of the \(i\)-th word in the email. That is to say, \(w_{x_i}\) is the \(i\)-th word in the email.
The joint distribution in this model is
\begin{equation}
P(x, y) = P(y)\prod_{j=1}^{l} P(x_j|y).
\end{equation}
Warning: the length \(l\) varies from email to email!
The parameters for the model are
\begin{equation}
\phi_y = P(y=1), \qquad \phi_{k|y=0} = P(x_j=k|y=0), \qquad \phi_{k|y=1} = P(x_j=k|y=1),
\end{equation}
where \(k = 1, 2, \dots, n\) and \(P(x_j=k|y)\) is the probability of the \(j\)-th word of the email being \(w_k\) (assumed to be the same for every position \(j\)). The likelihood function of the model is
\begin{equation}
L(\phi_y, \phi_{k|y=0}, \phi_{k|y=1}) = \prod_{i=1}^m P(x^{(i)}, y^{(i)}) = \prod_{i=1}^m \left(\prod_{j=1}^{l_i} P(x_j^{(i)}|y^{(i)})\right)P(y^{(i)}),
\end{equation}
where \(l_i\) is the length of the \(i\)-th email.
Given a training set, the maximum likelihood estimate together with Laplace smoothing gives
\begin{equation}
\phi_{k|y=1} = \frac{\sum_{i=1}^m\sum_{j=1}^{l_i} 1\{x_j^{(i)}=k \wedge y^{(i)}=1\} + 1}{\sum_{i=1}^m 1\{y^{(i)}=1\}\, l_i + n}, \qquad
\phi_{k|y=0} = \frac{\sum_{i=1}^m\sum_{j=1}^{l_i} 1\{x_j^{(i)}=k \wedge y^{(i)}=0\} + 1}{\sum_{i=1}^m 1\{y^{(i)}=0\}\, l_i + n}, \qquad
\phi_y = \frac{1}{m}\sum_{i=1}^m 1\{y^{(i)}=1\}.
\end{equation}
This model is very suitable for textual data.
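To make this event model concrete, here is a minimal sketch of the training step and the prediction with Laplace smoothing; the tiny 'emails', the vocabulary size and the helper functions are all made up, and words are represented by 0-based dictionary indices.

import numpy as np

n = 6   # dictionary size
# toy emails as lists of word indices in {0, ..., n-1}; lengths vary
emails = [[0, 1, 3, 0], [0, 3], [2, 4, 5], [2, 5, 5, 4, 2]]
y = np.array([1, 1, 0, 0])   # 1 = spam

phi_y = np.mean(y == 1)

def word_probs(label):
    """P(word = k | y = label) with Laplace smoothing (+1 per word, +n in the denominator)."""
    counts = np.zeros(n)
    total = 0
    for email, label_i in zip(emails, y):
        if label_i == label:
            for k in email:
                counts[k] += 1
            total += len(email)
    return (counts + 1) / (total + n)

phi_spam, phi_ham = word_probs(1), word_probs(0)

def posterior_spam(email):
    """P(y=1|email), computed with log probabilities to avoid underflow."""
    log_spam = np.log(phi_y) + sum(np.log(phi_spam[k]) for k in email)
    log_ham = np.log(1 - phi_y) + sum(np.log(phi_ham[k]) for k in email)
    return 1.0 / (1.0 + np.exp(log_ham - log_spam))

print(posterior_spam([0, 3, 1]))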