Baked Pork Chops
The first thing that happened after coming to 路村 was gaining weight. A certain someone's cooking is seriously impressive; recorded here is one of her signature dishes: baked pork chops.
Believe in Mathematics
This contest wasn't hard, but I made one mistake on each of the two easiest problems.
Problem 1: Flood Fill
I remember feeling inexplicably happy when I first learned the term "flood fill" in high school, probably because the name so vividly describes the DFS process.
class Solution(object):
    def floodFill(self, image, sr, sc, newColor):
        """
        :type image: List[List[int]]
        :type sr: int
        :type sc: int
        :type newColor: int
        :rtype: List[List[int]]
        """
        visited = set()
        n = len(image)
        m = len(image[0])
        color = image[sr][sc]  # the original color to be replaced

        def dfs(r, c):
            image[r][c] = newColor
            visited.add((r, c))
            # explore the four orthogonal neighbors
            for dr, dc in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
                x, y = r + dr, c + dc
                # skip out-of-bounds cells and cells of a different color
                if x < 0 or y < 0 or x >= n or y >= m or image[x][y] != color:
                    continue
                # visited guards against re-flooding when newColor == color
                if (x, y) in visited:
                    continue
                dfs(x, y)

        dfs(sr, sc)
        return image
This article is my notes on the topic of factor analysis. These notes come out of lectures 13 and 14 of Andrew Ng's online course. Roughly speaking, factor analysis models some \(n\) dimensional observed data with the assumption that these data actually come from some \(d\) dimensional plane in \(\R^n\), up to some Gaussian distributed errors. Let's make it more precise.
Suppose we have a set of observed data \(\{x^{(1)}, \dots, x^{(m)}\}\) implicitly labeled by some latent random variable \(z \in \R^d\) where

\[z \sim N(0, I).\]
The factor analysis model tries to model \(P(x)\) using the assumption that

\[x = \mu + \Lambda z + \epsilon, \quad \epsilon \sim N(0, \Psi),\]

for some \(\mu \in \R^n\), \(\Lambda \in \R^{n \times d}\) and diagonal matrix \(\Psi \in \R^{n \times n}\). These \(\mu\), \(\Lambda\) and \(\Psi\) are the parameters of the model.
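To make the generative story concrete, here is a minimal numpy sketch of the process above. The dimensions n, d, m and the particular values of \(\mu\), \(\Lambda\), \(\Psi\) are made up for illustration; the only claim is that the sampled \(x\) follows the model.

import numpy as np

# Toy dimensions, chosen arbitrarily for this sketch (not from the notes)
n, d, m = 5, 2, 1000

rng = np.random.default_rng(0)

# Model parameters mu, Lambda, Psi (Psi diagonal), also chosen arbitrarily
mu = rng.normal(size=n)
Lam = rng.normal(size=(n, d))
Psi = np.diag(rng.uniform(0.1, 0.5, size=n))

# Generative process: z ~ N(0, I), x = mu + Lambda z + eps, eps ~ N(0, Psi)
z = rng.normal(size=(m, d))                              # latent variables
eps = rng.multivariate_normal(np.zeros(n), Psi, size=m)  # Gaussian errors
x = mu + z @ Lam.T + eps                                 # observed data, shape (m, n)

# Marginally x ~ N(mu, Lambda Lambda^T + Psi); compare with the sample covariance
print(np.abs(np.cov(x.T) - (Lam @ Lam.T + Psi)).max())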
In this article, I will collect my notes on the Expectation-Maximization algorithm (EM), based on lectures 12 and 13 of Andrew Ng's online course. Given a set of unlabeled data points, EM iteratively tries to determine their distribution, assuming that all data points are implicitly labeled (by unobserved latent variables). For simplicity, we shall limit ourselves to the case where there are only finitely many implicit labels.
Given a set of unlabeled data \(\{x^{(1)}, \dots, x^{(m)}\}\), our goal is to determine \(P(x)\), the distribution of \(x\), with the following assumptions.

1. Each \(x^{(i)}\) has an unobserved label \(z^{(i)}\) taking one of finitely many values \(1, \dots, j\), with \(P(z = l) = \phi_l\).
2. Conditioned on \(z = l\), \(x\) follows a distribution determined by a parameter \(a_l\), i.e., \(P(x \mid z = l) = P(x; a_l)\).

Assumptions 1 and 2 give us a set of parameters \(\theta = (\phi_1, \dots, \phi_j, a_1, \dots, a_j)\) and

\[P(x; \theta) = \sum_{l=1}^{j} P(x \mid z = l; a_l)\, \phi_l.\]
We want to find this set of parameters so that the likelihood function

\[L(\theta) = \prod_{i=1}^{m} P(x^{(i)}; \theta)\]

is maximized. Or equivalently, the log likelihood function below is maximized:

\[\ell(\theta) = \sum_{i=1}^{m} \log P(x^{(i)}; \theta),\]

where \(P(x^{(i)}; \theta) = \sum_{l=1}^{j} P(x^{(i)} \mid z^{(i)} = l; a_l)\, \phi_l\) as above.
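A Gaussian mixture is the standard concrete instance of this setup. Below is a minimal sketch of one possible EM loop for a 1-D mixture of two Gaussians, written for illustration; the toy data, the initialization, and all variable names are made up, and the lecture's derivation is the authority on the update formulas.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data drawn from two Gaussians (the ground truth is hidden from EM)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
m, k = len(x), 2

# Initialize theta = (phi, mu, sigma)
phi = np.full(k, 1.0 / k)      # phi_l = P(z = l)
mu = rng.choice(x, k)          # component means
sigma = np.full(k, x.std())    # component standard deviations

for _ in range(100):
    # E-step: w[i, l] = P(z = l | x_i; theta), the posterior "soft labels"
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    w = dens * phi
    w /= w.sum(axis=1, keepdims=True)

    # M-step: re-estimate the parameters from the soft counts
    nl = w.sum(axis=0)
    phi = nl / m
    mu = (w * x[:, None]).sum(axis=0) / nl
    sigma = np.sqrt((w * (x[:, None] - mu) ** 2).sum(axis=0) / nl)

print(phi, mu, sigma)  # should approach the mixture that generated x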
This contest was about details and careful simulation.
Problem 1: Longest Word in Dictionary
I solved it with a trie. First sort the words by length, then insert them into the trie one by one. The point of the trie is that while inserting a word, we can simultaneously check whether all of its proper prefixes are already in the trie (which is also why we sort by length first).
class Solution(object):
    def longestWord(self, words):
        """
        :type words: List[str]
        :rtype: str
        """
        words.sort(key=lambda x: len(x))  # insert shorter words first
        tree = {'#': {}}                  # '#' marks the end of a word
        ans = ''
        for word in words:
            p = tree
            ok = True
            for c in word:
                if c not in p:
                    p[c] = {}
                # every proper prefix must itself be a word
                ok = ok and '#' in p
                p = p[c]
            p['#'] = {}
            # prefer longer words; among equals, take the lexicographically smallest
            if ok and (len(word) > len(ans) or word < ans):
                ans = word
        return ans
In this post, I will use a softmax model to recognize handwritten digits. The first part of the article is the theoretical derivation of the softmax model, and the second part is its implementation. Softmax is in essence a linear model, so the theory needed for the derivation was already covered in detail in my earlier post Generalized Linear Model. Softmax is a generalization of logistic regression: logistic regression uses the Bernoulli distribution, while softmax uses the multinomial distribution.
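As a small preview of the implementation part, the softmax function itself fits in a few lines of numpy. This is my own sketch, not the post's final model:

import numpy as np

def softmax(scores):
    """Map a vector of linear scores theta_l^T x to class probabilities."""
    # Subtracting the max is a standard numerical-stability trick;
    # it does not change the resulting probabilities.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# Example with three classes: the largest score gets the largest probability
print(softmax(np.array([1.0, 2.0, 0.5])))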
The lesson of this contest: next time I meet an O(n^2) problem with n < 1000, I must use C++! A lesson learned through tears 😭
Problem 1: Degree of an Array
While counting each number's frequency, also record the leftmost and rightmost positions (l, r) at which it appears. The shortest subarray containing all occurrences of that number then has length r-l+1.
class Solution(object):
    def findShortestSubArray(self, nums):
        """
        :type nums: List[int]
        :rtype: int
        """
        cnt = {}  # value -> [frequency, first index, last index]
        f = 0     # the degree of the array (maximal frequency)
        for i, n in enumerate(nums):
            if n not in cnt:
                cnt[n] = [1, i, i]
            else:
                c, a, _ = cnt[n]
                cnt[n] = [c + 1, a, i]
            f = max(f, cnt[n][0])
        # among the values attaining the degree, take the shortest span
        return min(v[2] - v[1] + 1 for v in cnt.values() if v[0] == f)
This article is my notes on support vector machines, based on lectures 7 and 8 of Machine Learning by Andrew Ng.
In a binary classification problem, we can use logistic regression

\[h_\theta(x) = g(\theta^T x),\]

where \(g(z) = \frac{1}{1 + e^{-z}}\) is the sigmoid function, whose graph is an S-shaped curve increasing from \(0\) to \(1\) with \(g(0) = 0.5\).
Then given input \(x\), the model predicts \(1\) if and only if \(\theta^T x \ge 0\), in which case \(h_\theta(x) = g(\theta^T x) \ge 0.5\); and it predicts \(0\) if and only if \(\theta^T x < 0\). Moreover, based on the shape of the sigmoid function, if \(\theta^T x \gg 0\), we are very confident that \(y=1\). Likewise, if \(\theta^T x \ll 0\), we are very confident that \(y=0\). Therefore, we hope that for the training set \(\{(x^{(i)}, y^{(i)})\}_{i=1}^m\), we can find a \(\theta\) such that \(\theta^T x^{(i)} \gg 0\) whenever \(y^{(i)}=1\) and \(\theta^T x^{(i)} \ll 0\) whenever \(y^{(i)}=0\).
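For concreteness, here is a tiny sketch of the prediction rule and of how the size of \(\theta^T x\) translates into confidence; the values of theta and x below are made-up numbers for illustration only.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(theta, x):
    """Logistic regression prediction: 1 iff theta^T x >= 0."""
    return int(np.dot(theta, x) >= 0)

theta = np.array([2.0, -1.0])  # hypothetical learned parameters
for x in [np.array([3.0, 1.0]), np.array([0.2, 0.3]), np.array([-2.0, 1.0])]:
    z = np.dot(theta, x)
    # large |theta^T x| means g(z) is far from 0.5, i.e. a confident prediction
    print(predict(theta, x), round(sigmoid(z), 3))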