# Generalized Linear Model

This article on the *Generalized Linear Model (GLM)* is based on the first four lectures of Machine Learning by Andrew Ng, but the structure of the article is quite different from the lectures. I will talk about the exponential family of distributions first. Then I will discuss the general idea of GLM. Finally, I will try to derive some well-known learning algorithms from GLM.

## Exponential Family

A family of probability distributions \(P(y; \eta)\) is an *exponential family* if it can be written in the form

\[ P(y; \eta) = b(y) \exp\left(\eta^T T(y) - a(\eta)\right). \]

\(\eta\) is called the natural parameter, \(T(y)\) is called the sufficient statistic, \(a(\eta)\) is called the log partition function, and \(b(y)\) is called the base measure.

**Example 1.** Consider a family of normal distributions \(P(y; \mu)\) with unknown mean \(\mu\) and known variance \(\sigma^2\). Then

\[ P(y; \mu) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y-\mu)^2}{2\sigma^2}\right). \]

We can rewrite \(P(y; \mu)\) in the following form:

\[ P(y; \mu) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{y^2}{2\sigma^2}} \exp\left(\frac{\mu}{\sigma} \cdot \frac{y}{\sigma} - \frac{\mu^2}{2\sigma^2}\right). \]

Therefore, we can set

\[ b(y) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{y^2}{2\sigma^2}}, \quad \eta = \frac{\mu}{\sigma}, \quad T(y) = \frac{y}{\sigma}, \quad a(\eta) = \frac{\mu^2}{2\sigma^2} = \frac{\eta^2}{2}. \]
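As a quick numerical sanity check (my own sketch, not from the lecture) that these choices of \(b\), \(\eta\), \(T\), and \(a\) reproduce the normal density:

```python
import math

def normal_pdf(y, mu, sigma):
    # The normal density with mean mu and variance sigma^2.
    return math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def exp_family_pdf(y, mu, sigma):
    # b(y) * exp(eta * T(y) - a(eta)) with the choices from Example 1.
    b = math.exp(-(y ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)
    eta = mu / sigma
    T = y / sigma
    a = eta ** 2 / 2
    return b * math.exp(eta * T - a)

# The two forms agree at arbitrary points.
for y in (-1.0, 0.0, 2.5):
    assert abs(normal_pdf(y, 1.0, 2.0) - exp_family_pdf(y, 1.0, 2.0)) < 1e-12
```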

**Example 2.** Consider a family of Bernoulli distributions parametrized by \(\phi\). Then

\[ P(y; \phi) = \phi^y (1-\phi)^{1-y}, \quad y \in \{0, 1\}. \]

Let's rewrite it in the exponential form:

\[ P(y; \phi) = \exp\left(y \log \phi + (1-y) \log(1-\phi)\right) = \exp\left(\log\left(\frac{\phi}{1-\phi}\right) y + \log(1-\phi)\right). \]

Comparing the above result with the definition of exponential family, we get

\[ b(y) = 1, \quad \eta = \log\left(\frac{\phi}{1-\phi}\right), \quad T(y) = y, \quad a(\eta) = -\log(1-\phi). \]

We notice that \(a(\eta)\) is not in terms of \(\eta\). We need to express \(\phi\) in terms of \(\eta\). From \(\eta = \log\left(\frac{\phi}{1-\phi}\right)\), we solve

\[ \phi = \frac{1}{1+e^{-\eta}}. \]

So,

\[ a(\eta) = -\log(1-\phi) = \log\left(1+e^{\eta}\right). \]
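As a quick check of this inversion (my own sketch, not part of the lecture): the logit and sigmoid functions undo each other, and the two expressions for \(a(\eta)\) agree numerically:

```python
import math

def logit(phi):
    # eta = log(phi / (1 - phi)), the natural parameter of the Bernoulli family.
    return math.log(phi / (1 - phi))

def sigmoid(eta):
    # phi = 1 / (1 + e^(-eta)), solving the logit for phi.
    return 1 / (1 + math.exp(-eta))

phi = 0.99
eta = logit(phi)
assert abs(sigmoid(eta) - phi) < 1e-12
# The two expressions for a(eta) agree: -log(1 - phi) == log(1 + e^eta)
assert abs(-math.log(1 - phi) - math.log(1 + math.exp(eta))) < 1e-9
```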

## Generalized Linear Model

In machine learning, we are often given a large sample set \(S = \{(x^{(i)}, y^{(i)}): i = 1, \dots, m\}\), and our task is to come up with some learning algorithm \(h_\theta(x)\) depending on \(\theta\) such that \(h_\theta(x)\) models \(S\) well in a certain sense. GLM is a powerful machinery which gives us a way to find a reasonably good \(h_\theta(x)\).

GLM has three assumptions:

(1) Given input \(x\) and learning parameter \(\theta\), the output \(y|x, \theta\) is distributed in an exponential family \(P(y; \eta)\) for some natural parameter \(\eta\).

(2) \(h_\theta(x) = E(T(y)|x; \theta)\), the expected value of \(T(y)\) given \(x\).

(3) \(\eta = \theta^T x\). (In case \(\eta\) is a vector, assume \(\eta_i = \theta_i^T x\).)

**Remark.** The exponential family varies as the learning problem varies. For example, if we want to model the number of people visiting a certain website over time, we should use a Poisson distribution. The nature of the problem usually determines the exponential family we should use. The second assumption is the one that gives us the learning algorithm \(h_\theta(x)\). However, one thing to keep in mind is that the algorithm predicts \(T(y)\), not \(y\). The third and last assumption is the design choice in GLM. I guess this is the reason why this model is called a generalized *linear* model.

Once we decide what kind of exponential family to use, we can derive the learning algorithm \(h_\theta(x)\) from GLM. But how do we determine \(\theta\)? One answer is to use maximum likelihood estimation. Let's dive into the idea of maximum likelihood estimation.

The chosen exponential family \(P(y; \eta)\) or \(P(y; x, \theta)\) is a probability density function of \(y\) in terms of \(x\) and \(\theta\). Let's fix \(\theta\) for now. Then \(P(y; x, \theta)\) is a probability density function of \(y\) in terms of \(x\). Given a sample point \((x^{(i)}, y^{(i)})\), \(P(y^{(i)}; x^{(i)}, \theta)\) is the *relative* likelihood of \(h_\theta(x^{(i)})\) being \(y^{(i)}\), measuring how good our learning algorithm \(h_\theta\) is at the \(i\)-th sample point. We must be aware that \(P(y^{(i)}; x^{(i)}, \theta)\) is only the *relative* likelihood: for a continuous distribution, the *absolute* likelihood of \(h_\theta(x^{(i)})\) being exactly \(y^{(i)}\) is *always* 0. Using the *relative* likelihood, we can define a *likelihood function* of \(\theta\):

\[ L(\theta) = \prod_{i=1}^m P\left(y^{(i)}; x^{(i)}, \theta\right). \]

Therefore, the larger \(L(\theta)\) is, the better our learning algorithm \(h_\theta(x)\) is. Hence, to find the best learning parameter \(\theta\), we need to maximize \(L(\theta)\). Equivalently, we need to maximize the *log-likelihood* function

\[ l(\theta) = \log L(\theta) = \sum_{i=1}^m \log P\left(y^{(i)}; x^{(i)}, \theta\right). \]
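As a tiny illustration of maximum likelihood estimation (my own example, not from the lecture): for Bernoulli samples, the log-likelihood is maximized at the sample mean, which a brute-force scan over candidate \(\phi\) values confirms:

```python
import math

samples = [1, 1, 0, 1, 0, 1, 1, 1]  # hypothetical coin flips

def log_likelihood(phi, ys):
    # l(phi) = sum_i log P(y_i; phi) for the Bernoulli family.
    return sum(y * math.log(phi) + (1 - y) * math.log(1 - phi) for y in ys)

# Scan candidate values of phi; the maximizer is the sample mean 6/8 = 0.75.
best = max((k / 100 for k in range(1, 100)), key=lambda p: log_likelihood(p, samples))
assert abs(best - 0.75) < 1e-9
```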

## Linear regression

Suppose the chosen exponential family \(P(y; \mu)\) for GLM is a family of normal distributions parametrized by mean \(\mu\) with fixed variance \(\sigma^2\):

\[ P(y; \mu) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y-\mu)^2}{2\sigma^2}\right). \]

From Example 1 above,

\[ \eta = \frac{\mu}{\sigma}, \quad T(y) = \frac{y}{\sigma}. \]

By GLM, we have

\[ h_\theta(x) = E(T(y)|x; \theta) = \frac{\mu}{\sigma} = \eta = \theta^T x. \label{eqn:linear} \]

By the remark of GLM above, \(h_\theta(x)\) predicts \(T(y) = y/\sigma\), not \(y\) itself, so the model predicting \(y\) is the following, which I also denote by \(h_\theta(x)\), since \(\sigma\) can be absorbed into \(\theta\):

\[ h_\theta(x) = \sigma\, \theta^T x. \]

From Equation (\ref{eqn:linear}), we also know that \(\mu = \sigma \theta^T x\). At each sample point \((x^{(i)}, y^{(i)})\),

\[ P\left(y^{(i)}; x^{(i)}, \theta\right) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{\left(y^{(i)} - \sigma \theta^T x^{(i)}\right)^2}{2\sigma^2}\right). \]

Hence,

\[ l(\theta) = \sum_{i=1}^m \log P\left(y^{(i)}; x^{(i)}, \theta\right) = m \log \frac{1}{\sqrt{2\pi}\,\sigma} - \frac{1}{2\sigma^2} \sum_{i=1}^m \left(y^{(i)} - \sigma \theta^T x^{(i)}\right)^2. \]

To maximize \(l(\theta)\) is then the same as to minimize

\[ J(\theta) = \frac{1}{2} \sum_{i=1}^m \left(y^{(i)} - \sigma \theta^T x^{(i)}\right)^2. \]

Since \(\sigma\) is fixed, we can absorb \(\sigma\) into \(\theta\) so that our learning algorithm is in the standard form

\[ h_\theta(x) = \theta^T x, \qquad J(\theta) = \frac{1}{2} \sum_{i=1}^m \left(y^{(i)} - \theta^T x^{(i)}\right)^2, \]

where \(\theta\) minimizes \(J(\theta)\). We rediscover linear regression from normal distributions. This discovery also hints that we should use linear regression when \(y\) is normally distributed according to \(x\) with fixed variance.
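To make this concrete, here is a minimal sketch (my own, with made-up data) of finding \(\theta\) by batch gradient descent on \(J(\theta)\), after absorbing \(\sigma\):

```python
# A minimal sketch of linear regression by batch gradient descent on
# J(theta) = (1/2) * sum_i (y_i - theta^T x_i)^2.  The data are made up;
# each x carries a leading 1 for the intercept term.
xs = [(1.0, 0.0), (1.0, 1.0), (1.0, 2.0), (1.0, 3.0)]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by y = 1 + 2x

theta = [0.0, 0.0]
lr = 0.05  # learning rate
for _ in range(5000):
    # Gradient of J: dJ/dtheta_j = -sum_i (y_i - theta^T x_i) * x_ij
    grad = [0.0, 0.0]
    for x, y in zip(xs, ys):
        err = y - sum(t * xi for t, xi in zip(theta, x))
        for j in range(2):
            grad[j] -= err * x[j]
    theta = [t - lr * g for t, g in zip(theta, grad)]

# Gradient descent recovers the generating parameters (1, 2).
assert abs(theta[0] - 1.0) < 1e-6 and abs(theta[1] - 2.0) < 1e-6
```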

## Logistic regression

The core of logistic problems is to group data into two categories. For example, determining whether a tumor is benign or malignant according to a set of medical features is a logistic problem. In the actual world, instead of being 100% certain, a doctor might tell a patient that there is, say, a 99% chance that a tumor is benign. So a logistic regression consumes an input and outputs the probability of being positive. (In the tumor example above, being positive means the tumor is benign.) The probability of being positive or not simply obeys a Bernoulli distribution. Therefore, to obtain the learning algorithm for logistic regression, we start off with the exponential family of Bernoulli distributions parametrized by \(\phi\):

\[ P(y; \phi) = \phi^y (1-\phi)^{1-y}, \quad y \in \{0, 1\}. \]

**Example 2** gives us

\[ \eta = \log\left(\frac{\phi}{1-\phi}\right), \quad T(y) = y, \]

and

\[ \phi = \frac{1}{1+e^{-\eta}}. \]

So,

\[ E(T(y)|x; \theta) = E(y|x; \theta) = \phi = \frac{1}{1+e^{-\eta}} = \frac{1}{1+e^{-\theta^T x}}. \]

Thus, the learning algorithm for logistic regression is

\[ h_\theta(x) = \frac{1}{1+e^{-\theta^T x}}. \]

At each sample point \((x^{(i)}, y^{(i)})\), the relative likelihood is given by

\[ P\left(y^{(i)}; x^{(i)}, \theta\right) = h_\theta\left(x^{(i)}\right)^{y^{(i)}} \left(1 - h_\theta\left(x^{(i)}\right)\right)^{1-y^{(i)}}. \]

Thus, to determine the best \(\theta\), we need to maximize

\[ l(\theta) = \sum_{i=1}^m \left[ y^{(i)} \log h_\theta\left(x^{(i)}\right) + \left(1-y^{(i)}\right) \log\left(1 - h_\theta\left(x^{(i)}\right)\right) \right]. \]
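As a sketch of how this maximization can be carried out (my own illustration with made-up data; Newton's method or any gradient method would also do), batch gradient ascent uses the gradient \(\sum_i \left(y^{(i)} - h_\theta(x^{(i)})\right) x^{(i)}\):

```python
import math

# A sketch of logistic regression by gradient ascent on the log-likelihood
# l(theta).  Hypothetical data; each x carries a leading 1 for the intercept.
xs = [(1.0, -2.0), (1.0, -1.0), (1.0, 1.0), (1.0, 2.0)]
ys = [0, 0, 1, 1]

def h(theta, x):
    # h_theta(x) = 1 / (1 + e^(-theta^T x)), the sigmoid hypothesis.
    return 1 / (1 + math.exp(-sum(t * xi for t, xi in zip(theta, x))))

theta = [0.0, 0.0]
lr = 0.1  # learning rate
for _ in range(1000):
    # Gradient of l: dl/dtheta_j = sum_i (y_i - h(x_i)) * x_ij
    grad = [sum((y - h(theta, x)) * x[j] for x, y in zip(xs, ys)) for j in range(2)]
    theta = [t + lr * g for t, g in zip(theta, grad)]

# The fitted model separates the two classes.
assert h(theta, (1.0, -2.0)) < 0.5 < h(theta, (1.0, 2.0))
```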

## Poisson regression

Suppose we need to design a learning algorithm to model some counting problem; a Poisson distribution might be a good choice. In this section, we assume that sample points are distributed according to a Poisson distribution parametrized by mean \(\mu\):

\[ P(y; \mu) = \frac{\mu^y e^{-\mu}}{y!}, \]

where \(y = 0, 1, 2, \dots\). We need to rewrite \(P(y; \mu)\) into an exponential family:

\[ P(y; \mu) = \frac{1}{y!} \exp\left(y \log \mu - \mu\right). \]

Therefore, \(P(y; \mu)\) is indeed an exponential family and

\[ b(y) = \frac{1}{y!}, \quad \eta = \log \mu, \quad T(y) = y, \quad a(\eta) = \mu = e^{\eta}. \]

Since \(T(y)=y\),

\[ h_\theta(x) = E(T(y)|x; \theta) = E(y|x; \theta) = \mu = e^{\eta} = e^{\theta^T x}. \]

We obtain our learning algorithm: \(h_\theta(x) = e^{\theta^T x}\). Again, we use maximum likelihood estimation to determine the best \(\theta\). At each sample point \((x^{(i)}, y^{(i)})\), the relative likelihood is

\[ P\left(y^{(i)}; x^{(i)}, \theta\right) = \frac{1}{y^{(i)}!} \exp\left(y^{(i)} \theta^T x^{(i)} - e^{\theta^T x^{(i)}}\right). \]

Therefore,

\[ l(\theta) = \sum_{i=1}^m \left( y^{(i)} \theta^T x^{(i)} - e^{\theta^T x^{(i)}} - \log\left(y^{(i)}!\right) \right). \]

The desired \(\theta\) is the one maximizing \(l(\theta)\), or equivalently, the one maximizing

\[ \sum_{i=1}^m \left( y^{(i)} \theta^T x^{(i)} - e^{\theta^T x^{(i)}} \right). \]
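As with the other models, \(\theta\) can be found by gradient ascent. Here is a small sketch (my own, with hypothetical counts) on the simplified objective above, whose gradient is \(\sum_i \left(y^{(i)} - e^{\theta^T x^{(i)}}\right) x^{(i)}\):

```python
import math

# A sketch of Poisson regression by gradient ascent on the simplified
# objective sum_i (y_i * theta^T x_i - exp(theta^T x_i)).  Hypothetical
# count data; each x carries a leading 1 for the intercept.
xs = [(1.0, 0.0), (1.0, 1.0), (1.0, 2.0), (1.0, 3.0)]
ys = [1, 3, 7, 20]

def rate(theta, x):
    # h_theta(x) = exp(theta^T x), the predicted mean count.
    return math.exp(sum(t * xi for t, xi in zip(theta, x)))

theta = [0.0, 0.0]
lr = 0.001  # small learning rate: the exp link makes large steps unstable
for _ in range(20000):
    # Gradient: d/dtheta_j = sum_i (y_i - exp(theta^T x_i)) * x_ij
    grad = [sum((y - rate(theta, x)) * x[j] for x, y in zip(xs, ys)) for j in range(2)]
    theta = [t + lr * g for t, g in zip(theta, grad)]

# At the maximum the gradient vanishes, so predicted counts balance observed ones.
assert abs(sum(ys) - sum(rate(theta, x) for x in xs)) < 1e-6
```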