Rademacher functions
This post is my note for a What is…? seminar talk on Rademacher functions.
For a real number \(x\) in \([0, 1]\), let \((a_1(x), a_2(x), \cdots, a_n(x), \cdots)\) be its binary representation. That is to say, \[\begin{equation} x = \sum_{n=1}^\infty \frac{a_n(x)}{2^n}. \label{20170623-1} \end{equation}\]
For some \(x\), there might be two possible binary representations. Take \(\frac{1}{2}\) for example: it can be represented as \((1, 0, 0, \cdots)\) or as \((0, 1, 1, \cdots)\). In this situation, we always prefer the former representation, the one in which all terms eventually become \(0\).
Rademacher functions are closely related to binary representations. In fact, since \(r_n(x) = 1 - 2a_n(x)\), we easily get the following: \[\begin{equation} \sum_{n=1}^{\infty} \frac{r_n(x)}{2^n} = \sum_{n=1}^{\infty}\frac{1}{2^n} - 2\sum_{n=1}^\infty \frac{a_n(x)}{2^n} = 1 - 2x. \label{20170623-2} \end{equation}\]
Another easy fact is the following equivalent definition of \(r_n(x)\). We define \(r: \R \to \{-1, 1\}\) by \[r(x) = (-1)^{\lfloor x \rfloor},\] where \(\lfloor x \rfloor\) is the largest integer that is less than or equal to \(x\). Then we have an alternative definition of the Rademacher functions as \[r_n(x) = r(2^n x).\]
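As a quick sanity check (a sketch that is not part of the original note; the helper names are made up and the usual floating-point caveats apply), here is a small Python snippet that extracts the binary digits via \(a_n(x) = \lfloor 2^n x \rfloor \bmod 2\) and verifies the identities above numerically.

```python
import math

def a(n, x):
    # n-th binary digit of x in [0, 1), using the representation that
    # terminates on dyadic rationals.
    return math.floor(2**n * x) % 2

def r(n, x):
    # Rademacher function via the digits: r_n(x) = 1 - 2 a_n(x).
    return 1 - 2 * a(n, x)

x, N = 0.3141, 40  # N = truncation level for the infinite sums
print(sum(a(n, x) / 2**n for n in range(1, N + 1)), x)          # should agree
print(sum(r(n, x) / 2**n for n in range(1, N + 1)), 1 - 2 * x)  # should agree
# The floor-based definition r(2^n x) = (-1)^floor(2^n x) gives the same values:
print(all((-1) ** math.floor(2**n * x) == r(n, x) for n in range(1, N + 1)))
```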
We are going to reprove two classical results using Rademacher functions.
Proof. This theorem can be proved easily by repeatedly applying \[\sin x = 2 \sin \frac{x}{2} \cos \frac{x}{2},\] together with \[\lim_{x \to 0} \frac{\sin x}{x} = 1.\] But we will use Rademacher functions to give another interesting proof.
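(For completeness, that classical argument boils down to the telescoping identity \[\sin x = 2\sin\frac{x}{2}\cos\frac{x}{2} = 2^2\sin\frac{x}{2^2}\cos\frac{x}{2^2}\cos\frac{x}{2} = \cdots = 2^n \sin\frac{x}{2^n}\prod_{j=1}^n \cos\frac{x}{2^j},\] so that, after dividing by \(x\), \[\frac{\sin x}{x} = \frac{\sin(x/2^n)}{x/2^n}\prod_{j=1}^n \cos\frac{x}{2^j} \longrightarrow \prod_{j=1}^\infty \cos\frac{x}{2^j} \quad \text{as } n \to \infty.)\]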
Let \(n \ge 1\) be an integer. For any \(c_1, \cdots, c_n \in \R\), consider the function \(f:[0, 1] \to \R\) defined by \[f(t) = \sum_{j=1}^n c_jr_j(t).\] From Lemma 2, we see that for any \(j \le n\), \(r_j(t)\) is constant on \[\left[\frac{k}{2^n}, \frac{k+1}{2^n}\right),\] for \(k=0, 1, \cdots, 2^n-1\). Therefore, \(f(t)\) is constant on each such interval. If we fix a \(k\), then all \(t \in \left[\frac{k}{2^n}, \frac{k+1}{2^n}\right)\) share the same first \(n\) terms in their binary representations. From this observation, one concludes that there is a bijection between \(\Delta_n = \{-1, 1\}^n\) and \[\left\{\left[\frac{k}{2^n}, \frac{k+1}{2^n}\right): k=0, \cdots, 2^n-1\right\},\] matching a sign pattern \(\delta \in \Delta_n\) with the interval on which \((r_1(t), \cdots, r_n(t)) = \delta\).

Therefore, \[\begin{equation*} \begin{split} \int_0^1 e^{if(t)} \, dt &= \int_0^1 \exp\left(i\sum_{j=1}^n c_jr_j(t)\right) \, dt \\ &= \sum_{k=0}^{2^n-1} \int_{\frac{k}{2^n}}^{\frac{k+1}{2^n}} \exp\left(i\sum_{j=1}^n c_jr_j(t)\right) \, dt \\ &= \sum_{\delta \in \Delta_n} \frac{1}{2^n} \exp\left(i\sum_{j=1}^n c_j \delta_j\right) \\ &= \sum_{\delta \in \Delta_n} \prod_{j=1}^{n} \frac{e^{ic_j\delta_j}}{2}. \end{split} \end{equation*}\] The second-to-last equality above comes from the bijection described in the previous paragraph. Now split the sum over \(\Delta_n\) according to the first coordinate \(\delta_1 \in \{1, -1\}\); in both cases the remaining coordinates \((\delta_2, \cdots, \delta_n)\) run over all of \(\Delta_{n-1}\). Thus, \[\begin{equation*} \sum_{\delta \in \Delta_n} \prod_{j=1}^{n} \frac{e^{ic_j\delta_j}}{2} = \left(\frac{e^{ic_1}+e^{-ic_1}}{2}\right)\sum_{\delta \in \Delta_{n-1}} \prod_{j=2}^{n} \frac{e^{ic_j\delta_j}}{2}. \end{equation*}\] By induction, we have \[\begin{equation*} \sum_{\delta \in \Delta_n} \prod_{j=1}^{n} \frac{e^{ic_j\delta_j}}{2} = \prod_{j=1}^n \left(\frac{e^{ic_j}+e^{-ic_j}}{2}\right) = \prod_{j=1}^n \cos(c_j). \end{equation*}\] We thus obtain an interesting formula: \[\begin{equation} \int_0^1 e^{if(t)} \, dt = \prod_{j=1}^n \cos(c_j). \label{20170623-3} \end{equation}\]
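Here is a quick numerical sanity check of \(\eqref{20170623-3}\) (again a sketch that is not part of the original note, with made-up function names, using the floor-based formula for \(r_j\) from above). Since \(f\) is constant on the dyadic intervals of length \(2^{-n}\), the integral is an exact finite sum over their midpoints.

```python
import cmath
import math
import random

def r(j, t):
    # Rademacher function r_j(t) = (-1)^floor(2^j t) (floor-based definition).
    return (-1) ** math.floor(2**j * t)

def lhs(c):
    # f(t) = sum_j c_j r_j(t) is constant on each interval [k/2^n, (k+1)/2^n),
    # so the integral of e^{i f(t)} over [0, 1] is an exact sum over midpoints.
    n = len(c)
    return sum(
        cmath.exp(1j * sum(c[j] * r(j + 1, (k + 0.5) / 2**n) for j in range(n)))
        for k in range(2**n)
    ) / 2**n

c = [random.uniform(-3, 3) for _ in range(6)]
rhs = math.prod(math.cos(cj) for cj in c)
print(lhs(c), rhs)  # the two values should agree up to rounding error
```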
If we set \(c_j = \frac{x}{2^j}\), we then have \[\begin{equation*} \int_0^1 \exp \left(ix\sum_{j=1}^n \frac{r_j(t)}{2^j}\right) \, dt = \prod_{j=1}^n \cos \left(\frac{x}{2^j}\right). \end{equation*}\] Letting \(n \to \infty\) (the integrands are uniformly bounded, so we may pass the limit inside the integral) and using \(\eqref{20170623-2}\), we get \[\begin{equation*} \int_0^1 e^{ix (1-2t)} \, dt = \prod_{n=1}^\infty \cos \left(\frac{x}{2^n}\right). \end{equation*}\] The left hand side is easily evaluated: \[\begin{equation*} \int_0^1 e^{ix (1-2t)} \, dt = \frac{e^{ix} - e^{-ix}}{2ix} = \frac{\sin x}{x}, \end{equation*}\] which proves the theorem. \(\square\)
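A short sketch (not from the original note) checking the resulting product formula numerically:

```python
import math

def partial_product(x, terms=40):
    # Partial product prod_{n=1}^{terms} cos(x / 2^n); it converges quickly.
    return math.prod(math.cos(x / 2**n) for n in range(1, terms + 1))

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, math.sin(x) / x, partial_product(x))  # the last two should agree
```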
The proof using Rademacher functions is rather involved. However, \(\eqref{20170623-3}\) alone makes it worth the effort. The next application of Rademacher functions seems to me a little bit of overkill.
Proof. Well, this is immediate from basic combinatorics, but for the sake of another application, we can also prove it using Rademacher functions. I will only sketch the idea of the proof.
In the proof of Theorem 3, we saw that there is a bijection between \(\Delta_n = \{-1, 1\}^n\) and \[\left\{\left[\frac{k}{2^n}, \frac{k+1}{2^n}\right): k=0, \cdots, 2^n-1\right\}.\] So the outcome space of \(n\) coin tosses can be identified with a collection of intervals of equal length. This makes it possible to interpret the problem in terms of Rademacher functions. Each \(t \in [0, 1]\) prescribes one outcome of the tosses: if \(r_j(t) = 1\), then the \(j\)-th toss gives a head; if \(r_j(t) = -1\), then the \(j\)-th toss gives a tail. The event that \(l\) heads appear in \(n\) tosses then amounts to saying \[\sum_{j=1}^n r_j(t) = l - (n-l) = 2l - n.\] If we define \[\phi(t) = \frac{1}{2\pi}\int_0^{2\pi}e^{ix\left(\sum_{j=1}^{n}r_j(t)-(2l-n)\right)}\,dx,\] then the desired probability can be computed as \[\begin{equation} \int_0^1 \phi(t) \, dt. \label{20170623-4} \end{equation}\]
This is because \(\phi(t) = 1\) if \(\sum_{j=1}^n r_j(t) = 2l - n\) and \(\phi(t) = 0\) otherwise: the exponent is \(ix\) times an integer \(m\), and \(\frac{1}{2\pi}\int_0^{2\pi} e^{ixm}\,dx\) equals \(1\) when \(m = 0\) and \(0\) otherwise. Unfolding \(\eqref{20170623-4}\) with a somewhat lengthy but easy calculation, we get the desired result. \(\square\)
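To close the loop, here is a small sketch (not part of the original note; the function names are made up) that evaluates \(\eqref{20170623-4}\) exactly by summing over the dyadic intervals, using the fact that \(\phi\) is the indicator of the event \(\sum_{j=1}^n r_j(t) = 2l - n\), and compares the result with \(\binom{n}{l}/2^n\).

```python
import math

def r(j, t):
    # Rademacher function r_j(t) = (-1)^floor(2^j t) (floor-based definition).
    return (-1) ** math.floor(2**j * t)

def prob_l_heads(n, l):
    # phi(t) is constant on each dyadic interval [k/2^n, (k+1)/2^n) and equals
    # 1 exactly when sum_{j<=n} r_j(t) = 2l - n, so the integral is a finite sum.
    count = 0
    for k in range(2**n):
        t = (k + 0.5) / 2**n
        if sum(r(j, t) for j in range(1, n + 1)) == 2 * l - n:
            count += 1
    return count / 2**n

n, l = 8, 3
print(prob_l_heads(n, l), math.comb(n, l) / 2**n)  # should agree exactly
```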
While writing this post, I found a note by Jordan Bell on Rademacher functions. The talk today followed that note very closely, and it also contains more interesting applications.