# Simple Grid Detection for Sudoku

Sudoku is a fascinating puzzle, and it is very easy to get familiar with the rules of the game. I always like to compare sudoku with number theory: both are easy to understand but can be hard to solve. Fermat's Last Theorem, a problem easily understood even by middle school students, took mathematicians 358 years to solve completely. Solving sudoku puzzles can be even harder in general: it is proven that solving sudoku puzzles (generalized to $n^2 \times n^2$ grids) is an NP-complete problem! However, classical 9x9 sudoku puzzles are still feasible with a backtracking algorithm. We are not going to talk about how to solve sudoku puzzles here, though; we are trying to solve another problem: detecting grids in images containing sudoku puzzles.

To make the problem more concrete, let's make some assumptions about the image that contains a sudoku puzzle:

1. The image only contains one sudoku puzzle
2. The sudoku puzzle is the largest block in the image
3. There is little to no distortion or rotation of the sudoku puzzle grid

Also, in this article we focus only on simple solutions without heavy image processing. We will use Pillow to read images and convert them to grayscale, NumPy to manipulate images as arrays, and Matplotlib for visualization.
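The loading step described above can be sketched as follows. A real image would be read with `Image.open`; here a small synthetic RGB image stands in for a photo (the filename `sudoku.png` in the comment is a placeholder) so the snippet is self-contained:

```python
import numpy as np
from PIL import Image

# In practice: img = Image.open("sudoku.png").convert("L")
# Here a tiny synthetic red image stands in for a real photo.
rgb = Image.new("RGB", (4, 4), color=(255, 0, 0))
gray = rgb.convert("L")        # "L" = 8-bit grayscale
arr = np.asarray(gray)         # 2-D uint8 array, shape (height, width)
print(arr.shape, arr.dtype)    # (4, 4) uint8
```

Once the image is a NumPy array, grid lines can be found by operating on rows and columns directly, which is all we need for the simple approach here.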

# PyTorch on Raspberry Pi

Finally, I successfully installed PyTorch on my Raspberry Pi 3B. I followed the instructions in How to install PyTorch v0.3.1 on RaspberryPi 3B and a blog post (in Chinese) 在 RaspberryPi 上编译 PyTorch ("Compiling PyTorch on the Raspberry Pi"). I am running the latest Raspbian image (2018-04-18-raspbian-stretch) and a self-compiled Python 3.6.5. If you are interested in compiling Python 3.6.5 or above on your own, check out Installing Python 3.6 on Raspbian. In this post, I will write down the steps of my successful installation and share my compiled wheel file at the end.

# Chapi

I recently built a Flask app that ran on my Raspberry Pi. Then I shut it down. Why? In order to access the app from outside my home network, I had to expose my pi to the public internet. After only one day of exposure, my pi received several attack attempts, and the Flask app then always replied with 301. Here are some attack logs:

```
139.162.83.10 - "GET / HTTP/1.1" 404 -
45.33.115.189 - "GET / HTTP/1.0" 404 -
79.55.11.21 - "GET / HTTP/1.0" 404 -
123.151.42.61 - "GET http://www.baidu.com/ HTTP/1.1" 404 -
158.85.81.116 - "GET / HTTP/1.1" 404 -
```


To be more precise, the Flask app was still running, but the home router did not seem to be forwarding requests to my pi. Anyway, I have no idea what happened. I just shut the app down and flashed a fresh image onto my pi.

I want a way to communicate with my pi without exposing it. That is the motivation for starting the project chapi, which means "chat with pi".

# A "Naive" Recommendation System

## Introduction

$$P(like|article)$$

$$P(like|article) = \frac{P(article|like)P(like)}{P(article|like)P(like)+P(article|dislike)P(dislike)}$$
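The Bayes formula above can be evaluated directly. A minimal sketch, where the prior `p_like` and the two likelihoods are made-up numbers purely for illustration:

```python
# Bayes' rule for the "naive" recommender: P(like | article).
# All probabilities below are hypothetical, for illustration only.
p_like = 0.3                     # assumed prior P(like)
p_dislike = 1 - p_like           # P(dislike)
p_article_given_like = 0.8       # assumed likelihood P(article | like)
p_article_given_dislike = 0.2    # assumed likelihood P(article | dislike)

posterior = (p_article_given_like * p_like) / (
    p_article_given_like * p_like + p_article_given_dislike * p_dislike
)
print(round(posterior, 4))  # 0.6316
```

Even with a modest prior, a likelihood ratio of 4:1 in favor of "like" pushes the posterior well above one half.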

# LeetCode Contest 78

The input is a list of count-paired domains such as `"9001 discuss.leetcode.com"`, meaning `discuss.leetcode.com` was visited 9001 times. Each visit also counts toward the parent domains `leetcode.com` and `com`.

```python
import collections

class Solution:
    def subdomainVisits(self, cpdomains):
        """
        :type cpdomains: List[str]
        :rtype: List[str]
        """
        cnt = collections.defaultdict(int)
        for cp in cpdomains:
            count, domain = cp.split()  # e.g. "9001 discuss.leetcode.com"
            count = int(count)
            cnt[domain] += count
            p = domain.find('.')
            while p != -1:              # credit every parent domain
                cnt[domain[p + 1:]] += count
                p = domain.find('.', p + 1)
        return ['{} {}'.format(v, k) for k, v in cnt.items()]
```


# Markov Decision Process: Finite horizon

This post contains my notes on finite-horizon Markov decision processes, for lecture 18 in Andrew Ng's lecture series. In my previous two notes ([1], [2]) about Markov decision processes (MDPs), only state rewards were considered. We can easily generalize MDPs to state-action rewards.

## State-Action Reward

Our reward function is now a function of both states and actions. More precisely, the reward function is a function

$$R: S \times A \to \mathbb{R}.$$

All other requirements in the definition of an MDP remain intact. For completeness, we include the definition here. We shall pay attention to the fifth component.
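A minimal sketch of backward induction for a finite-horizon MDP with a state-action reward $R(s, a)$. The two states, two actions, horizon $T = 3$, and all table entries below are made up purely for illustration:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP with horizon T = 3.
S, A, T = 2, 2, 3
R = np.array([[1.0, 0.0],            # R[s, a]: state-action reward
              [0.0, 2.0]])
P = np.array([[[0.9, 0.1],           # P[s, a, s'] = P(s' | s, a)
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.1, 0.9]]])

V = np.zeros(S)                      # V_T(s) = 0 at the horizon
for t in range(T):                   # backward induction over time
    Q = R + P @ V                    # Q[s, a] = R(s, a) + E[V(s')]
    V = Q.max(axis=1)                # act greedily at each step
print(V)                             # [3.54 5.72]
```

The only change from the state-reward case is that the reward enters the Bellman backup per action, inside the maximization, rather than per state outside it.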

I wrote this post for lecture 17 in Andrew Ng's lecture collection on Machine Learning. In my previous post, we discussed Markov decision processes (MDPs) in their simplest form, where the set of states and the set of actions are both finite. But in real-world applications, states and actions can be infinite and even continuous. For example, if we want to model the state of a self-driving car in a 2D plane, we must at least have its position $(x, y)$, the direction $\theta$ the car is pointing, its velocity $(v_x, v_y)$, and the rate $r$ of change in $\theta$. So the state of a car lives in an at least 6-dimensional space. As for the actions of a car, we can control how fast it goes in the direction $\theta$, and we can also control $r$. Thus the actions have dimension 2.

In this post, we consider only continuous states with finite actions. Indeed, the action space usually has much lower dimension than the state space, so in the case of continuous actions, we might just discretize the action space to get a finite set of representative actions. One may argue that we can also discretize the state space. Yes, we can, but only when the dimension $n$ of the state space is small enough: if we discretize each dimension into $k$ parts, then there would be $k^n$ states. If $n$ is large, then $k^n$ is not feasible. This is the so-called curse of dimensionality. Moreover, discretizing the state space usually results in a lack of smoothness.
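The $k^n$ blow-up is easy to see numerically. Taking the 6-dimensional car state above with a modest $k = 10$ parts per dimension:

```python
# Curse of dimensionality: discretizing each of n state dimensions
# into k parts yields k**n grid cells.
def num_states(k, n):
    return k ** n

print(num_states(10, 2))  # 100 cells: fine for a 2-D state space
print(num_states(10, 6))  # 1000000 cells for the 6-D car state
```

Doubling the resolution to $k = 20$ in 6 dimensions already gives $6.4 \times 10^7$ cells, which is why discretization only works for low-dimensional state spaces.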