LeetCode Contest 60

This contest wasn't hard, but I got a wrong submission on each of the two easiest problems.

Problem 1: Flood Fill

We are given a 2D array representing an image and a coordinate (r, c). We need to recolor the whole connected component of cells that contains (r, c) and shares its value to a new value.

I remember feeling oddly happy when I learned the term Flood Fill back in high school, probably because the name describes the DFS process so vividly.

class Solution(object):
    def floodFill(self, image, sr, sc, newColor):
        """
        :type image: List[List[int]]
        :type sr: int
        :type sc: int
        :type newColor: int
        :rtype: List[List[int]]
        """
        visited = set()
        n = len(image)
        m = len(image[0])
        color = image[sr][sc]

        def dfs(r, c):
            # Recolor the current cell and mark it as visited.
            image[r][c] = newColor
            visited.add((r, c))
            # Visit the four neighbors that are in bounds, still have the
            # original color, and have not been visited yet.
            for dr, dc in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
                x, y = r+dr, c+dc
                if x < 0 or y < 0 or x >= n or y >= m or image[x][y] != color:
                    continue
                if (x, y) in visited:
                    continue
                dfs(x, y)

        dfs(sr, sc)
        return image
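
For example, starting the fill at (1, 1) with new color 2:

sol = Solution()
print(sol.floodFill([[1, 1, 1], [1, 1, 0], [1, 0, 1]], 1, 1, 2))
# [[2, 2, 2], [2, 2, 0], [2, 0, 1]]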

Read More

Garlic Egg Noodles

I recently learned how to make handmade egg noodles, and then began all kinds of questionable experiments. My first experiment was adding scallions to the dough, so the noodles I kneaded came out green.

Scallion egg noodles

Read More

Factor Analysis

This article is my notes on the topic of factor analysis. These notes come out of lectures 13 and 14 of Andrew Ng's online course. Roughly speaking, factor analysis models some \(n\)-dimensional observed data under the assumption that these data actually come from a \(d\)-dimensional subspace of \(\R^n\), up to some Gaussian distributed errors. Let's make this more precise.

Suppose we have a set of observed data \(\{x^{(1)}, \dots, x^{(m)}\}\) implicitly labeled by some latent random variable \(z \in \R^d\) where

$$z \sim \mathcal{N}(0, I).$$


The factor analysis model tries to model \(P(x)\) using the assumption that

$$\begin{equation} x|z \sim \mathcal{N}(\mu+\Lambda z, \Psi), \label{cond-xz} \end{equation}$$


for some \(\mu \in \R^n, \Lambda \in \R^{n \times d}\) and diagonal matrix \(\Psi \in \R^{n \times n}\). These \(\mu, \Lambda\) and \(\Psi\) are parameters of the model.
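
As a quick sanity check on what this generative story says, here is a minimal sketch that samples data from the model; the dimensions and the randomly drawn \(\mu\), \(\Lambda\), \(\Psi\) below are made up purely for illustration.

import numpy as np

# Hypothetical sizes: m points in R^n generated from a latent z in R^d.
n, d, m = 5, 2, 1000
rng = np.random.default_rng(0)

mu = rng.normal(size=n)               # mu in R^n
Lam = rng.normal(size=(n, d))         # factor loading matrix Lambda, n x d
psi = np.abs(rng.normal(size=n))      # diagonal entries of Psi (noise variances)

z = rng.normal(size=(m, d))                      # z ~ N(0, I)
noise = rng.normal(size=(m, n)) * np.sqrt(psi)   # noise ~ N(0, Psi), Psi diagonal
x = mu + z @ Lam.T + noise                       # x | z ~ N(mu + Lambda z, Psi)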

Read More

Expectation-Maximization algorithm

In this article, I will collect my notes on the Expectation-Maximization algorithm (EM), based on lectures 12 and 13 of Andrew Ng's online course. Given a set of unlabeled data points, EM iteratively tries to determine the distribution of the data, assuming that every data point is implicitly labeled by an unobserved latent variable. For simplicity, we shall limit ourselves to the case where there are only finitely many implicit labels.

Description of the problem

Given a set of unlabeled data \(\{x^{(1)}, \dots, x^{(m)}\}\), our goal is to determine \(P(x)\), the distribution of \(x\), with the following assumptions.

Assumptions.

  1. The unobserved latent variable \(z\) takes finitely many values \(\{1, \dots, k\}\) and obeys some multinomial distribution, i.e., \(P(z=j) = \phi_j\) with \(\sum \phi_j = 1\).

  2. \(\{P(x|z=j; a_j): j=1, \dots, k\}\) is a family of uniformly parametrized distributions.

Assumptions 1 and 2 give us a set of parameters \(\theta = (\phi_1, \dots, \phi_k, a_1, \dots, a_k)\) and

$$\begin{equation} P(x; \theta) = \sum_{j=1}^k P(x|z=j; \theta)P(z=j; \theta). \label{px} \end{equation}$$


We want to find this set of parameters so that the likelihood function

$$L(\theta) = \prod_{i=1}^m P(x^{(i)}; \theta) = \prod_{i=1}^m \sum_{j=1}^k P(x^{(i)}|z=j; \theta)P(z=j; \theta)$$


is maximized. Or equivalently, the log likelihood function below is maximized:

$$\begin{equation} l(\theta) = \sum_{i=1}^m \log\left(\sum_{j=1}^k P(x^{(i)}, z=j; \theta)\right), \label{log-likelihood} \end{equation}$$


where

$$P(x^{(i)}, z=j; \theta) = P(x^{(i)}|z=j; \theta)P(z=j; \theta).$$
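
The canonical example is the mixture of Gaussians. As a purely illustrative sketch (not the derivation in this post), here is a minimal EM loop for a 1-D Gaussian mixture, where the E-step computes the soft labels \(P(z=j|x^{(i)}; \theta)\) and the M-step re-estimates \(\theta\) from them.

import numpy as np

def em_gmm_1d(x, k, n_iters=100):
    # x: 1-D array of unlabeled data; k: number of latent labels.
    rng = np.random.default_rng(0)
    phi = np.full(k, 1.0 / k)                  # phi_j = P(z = j)
    mu = rng.choice(x, size=k, replace=False)  # initialize means from the data
    sigma2 = np.full(k, x.var())

    for _ in range(n_iters):
        # E-step: w[i, j] = P(z = j | x^(i); theta) by Bayes' rule.
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
        w = dens * phi
        w /= w.sum(axis=1, keepdims=True)

        # M-step: re-estimate theta = (phi, mu, sigma2) from the soft labels.
        nj = w.sum(axis=0)
        phi = nj / len(x)
        mu = (w * x[:, None]).sum(axis=0) / nj
        sigma2 = (w * (x[:, None] - mu) ** 2).sum(axis=0) / nj

    return phi, mu, sigma2

On data drawn from two well-separated Gaussians, this should approximately recover the two means, variances and mixing weights.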

Read More

Digit recognition, Softmax

In this post, I will use the softmax model to recognize handwritten digits. The first part of the article is the theoretical derivation of the softmax model, and the second part is its implementation. Softmax is essentially a linear model, so the theory needed for the derivation has already been covered in detail in my earlier post Generalized Linear Model. Softmax is a generalization of logistic regression: logistic regression uses the Bernoulli distribution, while softmax uses the multinomial distribution.
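
As a rough preview of the second part (this is only an illustrative sketch, not the implementation in this post; the weight matrix Theta below is a placeholder with one row per digit class), prediction with a trained softmax model looks like this:

import numpy as np

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating.
    shifted = scores - scores.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

def predict(Theta, x):
    # Theta: 10 x n weight matrix (one row per digit class), x: feature vector in R^n.
    return int(np.argmax(softmax(Theta @ x)))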

Read More

Support Vector Machine

This article is my notes on support vector machines for Lectures 7 and 8 of Machine Learning by Andrew Ng.

Intuition

In a binary classification problem, we can use logistic regression

$$h_\theta(x) = \frac{1}{1+e^{-\theta^T x}} = g(\theta^T x),$$


where \(g\) is the sigmoid function.

Then given input \(x\), the model predicts \(1\) if and only if \(\theta^T x \ge 0\), in which case \(h_\theta(x) = g(\theta^T x) \ge 0.5\); and it predicts \(0\) if and only if \(\theta^T x < 0\). Moreover, based on the shape of the sigmoid function, if \(\theta^T x \gg 0\), we are very confident that \(y=1\). Likewise, if \(\theta^T x \ll 0\), we are very confident that \(y=0\). Therefore, we hope that for the training set \(\{(x^{(i)}, y^{(i)})\}_{i=1}^m\), we can find a \(\theta\) such that \(\theta^T x^{(i)} \gg 0\) if \(y^{(i)}=1\) and \(\theta^T x^{(i)} \ll 0\) if \(y^{(i)}=0\).
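
To make this decision rule concrete, here is a tiny sketch (purely illustrative, not from the lectures) of the prediction step just described, assuming theta and x are numpy vectors:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    # h_theta(x) = g(theta^T x)
    return sigmoid(theta @ x)

def predict(theta, x):
    # Predict 1 iff theta^T x >= 0, i.e. iff h_theta(x) >= 0.5.
    return int(theta @ x >= 0)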

Read More

Generative Model

This article is my notes on generative models for Lectures 5 and 6 of Machine Learning by Andrew Ng. What we do in logistic regression using a generalized linear model is to approximate \(P(y|x)\) using the given data. This kind of learning algorithm is discriminative: we predict \(y\) directly from the input features \(x\). In contrast, a generative model models \(P(x|y)\), the probability of the features \(x\) given the class \(y\). In other words, we want to study what the structure of the features looks like given a class \(y\). If we also learn what \(P(y)\) is, we can easily recover \(P(y|x)\); for example, in the binary classification problem,

$$\begin{equation} P(y=1|x) = \frac{P(x|y=1)P(y=1)}{P(x)}, \label{eqn:bayes} \end{equation}$$


where \(P(x) = P(x|y=0)P(y=0) + P(x|y=1)P(y=1)\).
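
To see the Bayes rule above in action, here is a small sketch with 1-D Gaussian class-conditionals (anticipating Gaussian discriminant analysis below; the shared variance and all parameter choices are illustrative assumptions) that recovers \(P(y=1|x)\) from \(P(x|y)\) and \(P(y)\):

import numpy as np

def gaussian_pdf(x, mu, sigma2):
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

def posterior_y1(x, phi, mu0, mu1, sigma2):
    # P(y=1|x) = P(x|y=1)P(y=1) / (P(x|y=0)P(y=0) + P(x|y=1)P(y=1))
    num = gaussian_pdf(x, mu1, sigma2) * phi
    den = gaussian_pdf(x, mu0, sigma2) * (1 - phi) + num
    return num / den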

In this article, we are going to see simple examples of generative models: Gaussian discriminant analysis and Naive Bayes.

Read More

Generalized Linear Model (Examples)

This article is a companion to my other post Generalized Linear Model. In this article, I will implement some of the learning algorithms in Generalized Linear Model. To be more specific, I will do some examples on linear regression and logistic regression. With some effort, a Google search gave me some very good example datasets to work with. The datasets collected by Larry Winner are among the excellent ones, and they will be used in this article.

The implementations here use Python. Required 3rd party libraries are:

Read More

Generalized Linear Model

This article on Generalized Linear Models (GLM) is based on the first four lectures of Machine Learning by Andrew Ng, but the structure of the article is quite different from the lectures. I will talk about exponential families of distributions first. Then I will discuss the general idea of GLM. Finally, I will try to derive some well known learning algorithms from GLM.

Exponential Family

A family of distributions is an exponential family if it can be parametrized by a vector \(\eta\) in the form

$$P(y; \eta) = b(y)\exp(\eta^{T} T(y)-a(\eta)),$$


where \(T(y)\) and \(b(y)\) are (vector-valued) functions in terms of \(y\), and \(a(\eta)\) is a function in terms of \(\eta\).

\(\eta\) is called the natural parameter and \(T(y)\) is called the sufficient statistic.
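
For a concrete instance (a worked example anticipating the derivations later in the post), the Bernoulli distribution with mean \(\phi\) belongs to an exponential family, since

$$P(y; \phi) = \phi^y (1-\phi)^{1-y} = \exp\left(y \log\frac{\phi}{1-\phi} + \log(1-\phi)\right),$$


so \(\eta = \log\frac{\phi}{1-\phi}\), \(T(y) = y\), \(b(y) = 1\) and \(a(\eta) = -\log(1-\phi) = \log(1+e^\eta)\).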

Read More

A Simple Python Crawler

A web crawler can be roughly divided into three parts: fetching pages, extracting information, and saving that information. Python has many crawler frameworks, the most famous of which is Scrapy. It is also the only Python crawler framework I have ever used, and it is very pleasant to work with. What frustrated me is that Scrapy is troublesome to install on my Raspberry Pi Zero W, and the pages I wanted to crawl are fairly easy to process, so such a heavyweight framework felt unnecessary. So, in the spirit of learning, I started reinventing the wheel.

Before building my own, I found a few lightweight frameworks, and Sukhoi is the one I like most. Its author, iogf, wrote the framework on top of his own asynchronous and networking libraries, which I really admire. My framework is heavily inspired and influenced by Sukhoi.
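
Before getting to the framework itself, here is a minimal standard-library sketch of the three stages above; it is not the framework described in this post, and the URL and output path are placeholders.

import json
import re
from urllib.request import urlopen

def fetch(url):
    # Stage 1: fetch the page.
    with urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="ignore")

def extract(html):
    # Stage 2: extract information (here, just the <title> tag).
    match = re.search(r"<title>(.*?)</title>", html, re.S | re.I)
    return {"title": match.group(1).strip() if match else None}

def save(record, path="out.jsonl"):
    # Stage 3: save the information, one JSON record per line.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    save(extract(fetch("https://example.com")))  # placeholder URL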

Read More