# Expectation-Maximization algorithm

In this article, I will collect my notes on the Expectation-Maximization (EM) algorithm, based on lectures 12 and 13 of Andrew Ng's online course. Given a set of unlabeled data points, EM iteratively estimates the distribution of the data, under the assumption that each data point carries an unobserved latent label. For simplicity, we shall limit ourselves to the case where there are only finitely many latent labels.

## Description of the problem

Given a set of unlabeled data $\{x^{(1)}, \dots, x^{(m)}\}$, our goal is to determine $P(x)$, the distribution of $x$, with the following assumptions.

Assumptions.

1. There is an unobserved latent variable $z \in \{1, \dots, k\}$ taking finitely many values, and it obeys a multinomial distribution, i.e., $P(z=j) = \phi_j$ with $\sum_j \phi_j = 1$.

2. $\{P(x|z=j; a_j): j=1, \dots, k\}$ is a family of distributions drawn from a common parametrized family.

Assumptions 1 and 2 give us a set of parameters $\theta = (\phi_1, \dots, \phi_k, a_1, \dots, a_k)$ and

$$P(x; \theta) = \sum_{j=1}^k P(x|z=j; \theta)P(z=j; \theta). \label{px}$$

We want to find this set of parameters so that the likelihood function

$$L(\theta) = \prod_{i=1}^m P(x^{(i)}; \theta) = \prod_{i=1}^m \sum_{j=1}^k P(x^{(i)}|z=j; \theta)P(z=j; \theta)$$

is maximized. Equivalently, the log-likelihood function below is maximized:

$$l(\theta) = \sum_{i=1}^m \log\left(\sum_{j=1}^k P(x^{(i)}, z=j; \theta)\right), \label{log-likelihood}$$

where

$$P(x^{(i)}, z=j; \theta) = P(x^{(i)}|z=j; \theta)P(z=j; \theta).$$
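To make the log-likelihood concrete, here is a minimal numeric sketch, assuming a one-dimensional mixture of two Gaussians; the parameters `phis`, `mus`, `sigmas` and the data `xs` below are made-up values for illustration:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x, playing the role of P(x | z=j; a_j)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def log_likelihood(xs, phis, mus, sigmas):
    """l(theta) = sum_i log sum_j P(x_i | z=j) P(z=j)."""
    total = 0.0
    for x in xs:
        mix = sum(phi * gaussian_pdf(x, mu, s)
                  for phi, mu, s in zip(phis, mus, sigmas))
        total += math.log(mix)
    return total

xs = [-1.0, 0.2, 3.1, 2.8]                              # made-up data
print(log_likelihood(xs, [0.5, 0.5], [0.0, 3.0], [1.0, 1.0]))
```

EM would iterate over choices of $\theta$ to drive this quantity up; the sketch only evaluates it at one fixed $\theta$.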

# LeetCode Contest 57

For each word `x`, the prefixes to check are `x[0:1], x[0:2], ..., x[0:len(x)]`.
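As a quick illustration of this prefix enumeration (the word `"world"` is an arbitrary example):

```python
word = "world"
# All non-empty prefixes word[0:1], word[0:2], ..., word[0:len(word)]
prefixes = [word[:i] for i in range(1, len(word) + 1)]
print(prefixes)  # ['w', 'wo', 'wor', 'worl', 'world']
```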

```python
class Solution(object):
    def longestWord(self, words):
        """
        :type words: List[str]
        :rtype: str
        """
        # Process words from shortest to longest so that every
        # prefix of a word is inserted before the word itself.
        words.sort(key=lambda x: len(x))
        tree = {'#': {}}  # trie; '#' marks the end of a word
        ans = ''
        for word in words:
            p = tree
            ok = True
            for c in word:
                if c not in p:
                    p[c] = {}
                ok = ok and '#' in p  # every proper prefix must be a word
                p = p[c]
            p['#'] = {}  # mark this word in the trie
            # Thanks to the length sort, len(word) >= len(ans) here, so this
            # prefers longer words and breaks ties lexicographically.
            if ok and (len(word) > len(ans) or word < ans):
                ans = word
        return ans
```


# LeetCode Contest 54

```python
class Solution(object):
    def findShortestSubArray(self, nums):
        """
        :type nums: List[int]
        :rtype: int
        """
        # For each value, track [count, first index, last index].
        cnt = {}
        f = 0  # degree of the array: the maximum frequency
        for i, n in enumerate(nums):
            if n not in cnt:
                cnt[n] = [1, i, i]
            else:
                c, a, _ = cnt[n]
                cnt[n] = [c + 1, a, i]
            f = max(f, cnt[n][0])
        # The shortest subarray with the same degree spans the first and
        # last occurrences of some value that attains the maximum frequency.
        return min(v[2] - v[1] + 1 for v in cnt.values() if v[0] == f)
```


# LeetCode Contest 53

```python
class Solution(object):
    def hasAlternatingBits(self, n):
        """
        :type n: int
        :rtype: bool
        """
        # Adjacent bits in the binary representation must all differ.
        b = bin(n)[2:]
        return all(b[i] != b[i + 1] for i in range(len(b) - 1))
```


# Support Vector Machine

## Intuition

In a binary classification problem, we can use logistic regression

$$h_\theta(x) = \frac{1}{1+e^{-\theta^T x}} = g(\theta^T x),$$

where $g$ is the sigmoid function.

Then, given an input $x$, the model predicts $1$ if and only if $\theta^T x \ge 0$, in which case $h_\theta(x) = g(\theta^T x) \ge 0.5$; it predicts $0$ if and only if $\theta^T x < 0$. Moreover, based on the shape of the sigmoid function, if $\theta^T x \gg 0$, we are very confident that $y=1$; likewise, if $\theta^T x \ll 0$, we are very confident that $y=0$. Therefore, we hope to find, for the training set $\{(x^{(i)}, y^{(i)})\}_{i=1}^m$, a $\theta$ such that $\theta^T x^{(i)} \gg 0$ whenever $y^{(i)}=1$ and $\theta^T x^{(i)} \ll 0$ whenever $y^{(i)}=0$.
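The decision rule above can be sketched in a few lines of Python; the weight vector `theta` and the input `x` below are made-up examples:

```python
import math

def sigmoid(z):
    """The sigmoid function g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(theta, x):
    """Predict 1 iff theta^T x >= 0, i.e. h_theta(x) >= 0.5."""
    z = sum(t * xi for t, xi in zip(theta, x))
    return 1 if z >= 0 else 0

print(sigmoid(0.0))    # 0.5: exactly on the decision boundary
print(sigmoid(10.0))   # close to 1: very confident y = 1
print(sigmoid(-10.0))  # close to 0: very confident y = 0
print(predict([2.0, -1.0], [3.0, 1.0]))  # theta^T x = 5 >= 0, predicts 1
```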

# Generative Model

This article contains my notes on generative models, covering Lectures 5 and 6 of Machine Learning by Andrew Ng. In logistic regression with a generalized linear model, we approximate $P(y|x)$ from the given data. Such learning algorithms are discriminative: we predict $y$ directly from the input features $x$. By contrast, a generative model models $P(x|y)$, the probability of the features $x$ given the class $y$. In other words, we study what the structure of the features looks like given a class $y$. If we also learn $P(y)$, we can easily recover $P(y|x)$; for example, in a binary classification problem,

$$P(y=1|x) = \frac{P(x|y=1)P(y=1)}{P(x)}, \label{eqn:bayes}$$

where $P(x) = P(x|y=0)P(y=0) + P(x|y=1)P(y=1)$.
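As a numeric sketch of recovering the posterior via this formula, assuming made-up values for the likelihoods and the prior:

```python
def posterior_y1(px_y1, px_y0, py1):
    """P(y=1|x) = P(x|y=1)P(y=1) / P(x), with
    P(x) = P(x|y=0)P(y=0) + P(x|y=1)P(y=1)."""
    py0 = 1.0 - py1
    px = px_y1 * py1 + px_y0 * py0
    return px_y1 * py1 / px

# Hypothetical numbers: P(x|y=1) = 0.30, P(x|y=0) = 0.05, P(y=1) = 0.5
print(posterior_y1(0.30, 0.05, 0.5))  # ~0.857
```

With these numbers, observing $x$ shifts the prior $P(y=1) = 0.5$ up to roughly $0.857$, because $x$ is six times more likely under $y=1$ than under $y=0$.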

In this article, we will see two simple examples of generative models: Gaussian discriminant analysis and Naive Bayes.

# LeetCode Contest 52

The minimum number of repetitions of `A` needed to cover `B` is at least `i = (len(A) + len(B) - 1) // len(A)`, i.e., the ceiling of `len(B) / len(A)`.
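The expression above is the standard integer-arithmetic trick for ceiling division; a quick sketch:

```python
def ceil_div(a, b):
    """Ceiling of a / b for positive integers, without floating point."""
    return (a + b - 1) // b

print(ceil_div(7, 3))  # 3, since 7 / 3 rounds up to 3
print(ceil_div(6, 3))  # 2, exact division
```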

```python
class Solution(object):
    def repeatedStringMatch(self, A, B):
        """
        :type A: str
        :type B: str
        :rtype: int
        """
        # Lower bound: ceil(len(B) / len(A)) copies of A are needed.
        i = (len(A) + len(B) - 1) // len(A)
        # One extra copy beyond the lower bound always suffices, so
        # 2 * i is a safe upper bound for the binary search.
        j = 2 * i
        ans = -1
        while i <= j:
            k = (i + j) // 2
            tmp = A * k
            if tmp.find(B) != -1:
                ans = k
                j = k - 1
            else:
                i = k + 1
        return ans
```


# Generalized Linear Model (Examples)

This article is a companion to my other post, Generalized Linear Model. Here I implement some of the learning algorithms discussed there; specifically, I work through examples of linear regression and logistic regression. A Google search turned up some very good example data sets; the datasets collected by Larry Winner are an excellent collection, and they will be used in this article.

The implementations here use Python. The required third-party libraries are:

# LeetCode Contest 51

1. An integer: the score for this round.
2. `"+"`: this round's score is the sum of the scores of the previous two rounds.
3. `"D"`: this round's score is double the previous round's score.
4. `"C"`: the previous round's score is invalidated and removed.

```python
class Solution(object):
    def calPoints(self, ops):
        """
        :type ops: List[str]
        :rtype: int
        """
        points = []
        for op in ops:
            if op == 'C':
                points.pop()                            # invalidate the previous round
            elif op == 'D':
                points.append(points[-1] * 2)           # double the previous score
            elif op == '+':
                points.append(points[-1] + points[-2])  # sum of the previous two scores
            else:
                points.append(int(op))                  # a regular integer score
        return sum(points)
```