# Welcome to Randal Douc's wiki

A collaborative site on maths, but not only!


$$\newcommand{\arginf}{\mathrm{arginf}} \newcommand{\argmin}{\mathrm{argmin}} \newcommand{\argmax}{\mathrm{argmax}} \newcommand{\asconv}[1]{\stackrel{#1-a.s.}{\rightarrow}} \newcommand{\Aset}{\mathsf{A}} \newcommand{\b}[1]{{\mathbf{#1}}} \newcommand{\ball}[1]{\mathsf{B}(#1)} \newcommand{\bproof}{\textbf{Proof :}\quad} \newcommand{\bmuf}[2]{b_{#1,#2}} \newcommand{\card}{\mathrm{card}} \newcommand{\chunk}[3]{{#1}_{#2:#3}} \newcommand{\convprob}[1]{\stackrel{#1-\text{prob}}{\rightarrow}} \newcommand{\Cov}{\mathbb{C}\mathrm{ov}} \newcommand{\CPE}[2]{\PE\lr{#1| #2}} \renewcommand{\det}{\mathrm{det}} \newcommand{\dimlabel}{\mathsf{m}} \newcommand{\dimU}{\mathsf{q}} \newcommand{\dimX}{\mathsf{d}} \newcommand{\dimY}{\mathsf{p}} \newcommand{\dlim}{\Rightarrow} \newcommand{\e}[1]{{\left\lfloor #1 \right\rfloor}} \newcommand{\eproof}{\quad \Box} \newcommand{\eremark}{</WRAP>} \newcommand{\eqdef}{:=} \newcommand{\eqlaw}{\stackrel{\mathcal{L}}{=}} \newcommand{\eqsp}{\;} \newcommand{\Eset}{ {\mathsf E}} \newcommand{\esssup}{\mathrm{essup}} \newcommand{\fr}[1]{{\left\langle #1 \right\rangle}} \newcommand{\falph}{f} \renewcommand{\geq}{\geqslant} \newcommand{\hchi}{\hat \chi} \newcommand{\Hset}{\mathsf{H}} \newcommand{\Id}{\mathrm{Id}} \newcommand{\img}{\text{Im}} \newcommand{\indi}[1]{\mathbf{1}_{#1}} \newcommand{\indiacc}[1]{\mathbf{1}_{\{#1\}}} \newcommand{\indin}[1]{\mathbf{1}\{#1\}} \newcommand{\itemm}{\quad \quad \blacktriangleright \;} \newcommand{\ker}{\text{Ker}} \newcommand{\klbck}[2]{\mathrm{K}\lr{#1||#2}} \newcommand{\law}{\mathcal{L}} \newcommand{\labelinit}{\pi} \newcommand{\labelkernel}{Q} \renewcommand{\leq}{\leqslant} \newcommand{\lone}{\mathsf{L}_1} \newcommand{\lrav}[1]{\left|#1 \right|} \newcommand{\lr}[1]{\left(#1 \right)} \newcommand{\lrb}[1]{\left[#1 \right]} \newcommand{\lrc}[1]{\left\{#1 \right\}} \newcommand{\lrcb}[1]{\left\{#1 \right\}} \newcommand{\ltwo}[1]{\PE^{1/2}\lrb{\lrcb{#1}^2}} \newcommand{\Ltwo}{\mathrm{L}^2} \newcommand{\mc}[1]{\mathcal{#1}} 
\newcommand{\mcbb}{\mathcal B} \newcommand{\mcf}{\mathcal{F}} \newcommand{\meas}[1]{\mathrm{M}_{#1}} \newcommand{\norm}[1]{\left\|#1\right\|} \newcommand{\normmat}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \newcommand{\nset}{\mathbb N} \newcommand{\one}{\mathsf{1}} \newcommand{\PE}{\mathbb E} \newcommand{\PP}{\mathbb P} \newcommand{\projorth}[1]{\mathsf{P}^\perp_{#1}} \newcommand{\Psif}{\Psi_f} \newcommand{\pscal}[2]{\langle #1,#2\rangle} \newcommand{\pscal}[2]{\langle #1,#2\rangle} \newcommand{\psconv}{\stackrel{\PP-a.s.}{\rightarrow}} \newcommand{\qset}{\mathbb Q} \newcommand{\rmd}{\mathrm d} \newcommand{\rme}{\mathrm e} \newcommand{\rmi}{\mathrm i} \newcommand{\Rset}{\mathbb{R}} \newcommand{\rset}{\mathbb{R}} \newcommand{\rti}{\sigma} \newcommand{\section}[1]{==== #1 ====} \newcommand{\seq}[2]{\lrc{#1\eqsp: \eqsp #2}} \newcommand{\set}[2]{\lrc{#1\eqsp: \eqsp #2}} \newcommand{\sg}{\mathrm{sgn}} \newcommand{\supnorm}[1]{\left\|#1\right\|_{\infty}} \newcommand{\thv}{{\theta_\star}} \newcommand{\tmu}{ {\tilde{\mu}}} \newcommand{\Tset}{ {\mathsf{T}}} \newcommand{\Tsigma}{ {\mathcal{T}}} \newcommand{\ttheta}{{\tilde \theta}} \newcommand{\tv}[1]{\left\|#1\right\|_{\mathrm{TV}}} \newcommand{\unif}{\mathrm{Unif}} \newcommand{\weaklim}[1]{\stackrel{\mathcal{L}_{#1}}{\rightsquigarrow}} \newcommand{\Xset}{{\mathsf X}} \newcommand{\Xsigma}{\mathcal X} \newcommand{\Yset}{{\mathsf Y}} \newcommand{\Ysigma}{\mathcal Y} \newcommand{\Var}{\mathbb{V}\mathrm{ar}} \newcommand{\zset}{\mathbb{Z}} \newcommand{\Zset}{\mathsf{Z}}$$

2017/10/07 23:39

# Statement

Let $(U_i)$ be iid Rademacher random variables, i.e. $U_i=1$ or $-1$, each with probability $1/2$, and set $S_i=\sum_{j=1}^i U_j$, the associated partial sum. Define $\Delta=\inf\set{t>0}{S_t=0}$, the first return time to $0$. Show that $(S_n)$ returns to $0$ with probability one. What is the law of $\Delta$?
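Before turning to the proof, the statement can be sanity-checked by simulation. The sketch below (function and variable names are ours) estimates the law of $\Delta$ with a truncated walk and a fixed seed for reproducibility:

```python
import random

def first_return_time(rng, max_steps=2_000):
    """Run S_t = U_1 + ... + U_t with Rademacher steps until S_t == 0."""
    s = 0
    for t in range(1, max_steps + 1):
        s += rng.choice((-1, 1))
        if s == 0:
            return t
    return None  # censored: no return within max_steps

rng = random.Random(0)
samples = [first_return_time(rng) for _ in range(10_000)]
# empirical frequencies; the closed form derived in the proof
# gives P(Delta=2) = 1/2 and P(Delta=4) = 1/8
p2 = sum(t == 2 for t in samples) / len(samples)
p4 = sum(t == 4 for t in samples) / len(samples)
```

The truncation at `max_steps` is harmless for estimating small atoms of $\Delta$, even though $\PE[\Delta]=\infty$.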

# Proof

Obviously, $\PP(\Delta=2k+1)=0$ for all $k$, since the walk can only be at $0$ at even times. It will be useful in some parts of the proof to note that, by symmetry, $\PP(\Delta=2k, S_1>0)=\PP(\Delta=2k, S_1<0)$, so that $\PP(\Delta=2k)=2\PP(\Delta=2k, S_1>0)$. Define \begin{align*} &\alpha_k=\PP(\Delta=2k, S_1>0)\\ &\beta_k=\PP(S_{2k}=0, S_i \geq 0, \forall i \in [1:2k-1]) \end{align*} Note first that $\alpha_1=1/4$. Moreover, for all $k>1$, \begin{align*} \alpha_k&=\PP(\Delta=2k, S_1>0)\\ &=\PP(U_1=1,U_{2k}=-1, U_2+\ldots+U_i\geq 0\ \forall i \in [2:2k-2], U_2+\ldots+U_{2k-1}=0)\\ &= \beta_{k-1}/4\eqsp, \end{align*} where the last equality follows from the independence of $U_1$, $U_{2k}$ and $(U_2,\ldots,U_{2k-1})$, the event on the latter being the event defining $\beta_{k-1}$ for the shifted walk.

Since $\sum_{k=1}^{\infty} \alpha_k=\PP(\Delta \in 2\nset^*,S_1>0)<\infty$, we deduce that $\sum_{k=1}^{\infty}\beta_k<\infty$. This allows us to define $A(z)=\sum_{k=1}^{\infty} \alpha_k z^k$ and $B(z)=\sum_{k=1}^{\infty} \beta_k z^k$ for all $|z| \leq 1$, and the previous identities imply $$A(z)=\frac{z}{4}+\frac{z B(z)}{4}\eqsp.$$

Moreover, by the first-entrance decomposition (on the event defining $\beta_k$, the path is non-negative before its first return to $0$, so that $S_1>0$), \begin{align*} \beta_k&=\PP(S_{2k}=0, S_i \geq 0, \forall i \in [1:2k-1])\\ &=\sum_{\ell=1}^{k} \PP(\Delta=2\ell, S_1>0, S_{2k}-S_{2\ell}=0, S_i-\underbrace{S_{2\ell}}_{=0}\geq 0,\ \forall i \in [2\ell+1:2k-1])\\ &=\alpha_k+\sum_{\ell=1}^{k-1} \alpha_\ell \beta_{k-\ell}\eqsp. \end{align*} Multiplying by $z^k$ and summing over $k \in \nset^*$ yields $B(z)=A(z)+A(z)B(z)$. Substituting $B(z)=\frac{A(z)-z/4}{z/4}$, obtained from the first identity, gives $$\frac{A(z)-z/4}{z/4}=A(z)\lr{1+\frac{A(z)-z/4}{z/4}}\eqsp.$$ This is equivalent to $0=A^2(z)-A(z)+\frac{z}{4}$, so that $A(z)=\frac{1\pm \sqrt{1-z}}{2}$. Since $A(0)=0$ (equivalently, only one of the roots has an expansion with non-negative coefficients), $A(z)=\frac{1-\sqrt{1-z}}{2}$. This implies that $$\PP(\Delta<\infty)=\sum_{k=1}^\infty \underbrace{\PP(\Delta=2k)}_{2\PP(\Delta=2k,S_1>0)}= 2 A(1)=1\eqsp,$$ that is, with probability $1$, the random walk returns to $0$.
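The recursion $\alpha_{k+1}=\beta_k/4$ together with the first-entrance identity can be checked in exact rational arithmetic against the Taylor coefficients of $\frac{1-\sqrt{1-z}}{2}$, which are $\binom{2k-2}{k-1}\frac{1}{k4^k}$ (a Catalan-number identity). A minimal sketch:

```python
from fractions import Fraction
from math import comb

# alpha_1 = 1/4; alpha_{k+1} = beta_k / 4;
# beta_k = alpha_k + sum_{l=1}^{k-1} alpha_l beta_{k-l}
K = 12
alpha = {1: Fraction(1, 4)}
beta = {}
for k in range(1, K + 1):
    beta[k] = alpha[k] + sum(alpha[l] * beta[k - l] for l in range(1, k))
    alpha[k + 1] = beta[k] / 4

# compare with the coefficients of A(z) = (1 - sqrt(1-z))/2
for k in range(1, K + 1):
    assert alpha[k] == Fraction(comb(2 * k - 2, k - 1), k * 4**k)
```

Exact `Fraction` arithmetic avoids any floating-point doubt about the match.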

Moreover, expanding $\frac{1-\sqrt{1-z}}{2}$ yields $A(z)=\sum_{k=1}^{\infty} \binom{2k-2}{k-1} \frac{1}{k 4^k} z^k$. Therefore, by symmetry, $\PP(\Delta=2k)=2\PP(\Delta=2k,S_1>0)=2\binom{2k-2}{k-1}\frac{1}{k 4^k}$.
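For small $k$, this closed form can also be confirmed by brute force, enumerating all $2^{2k}$ equally likely sign sequences (function names are ours):

```python
from fractions import Fraction
from itertools import product
from math import comb

def first_return(steps):
    """Index of the first partial sum equal to 0, or None."""
    s = 0
    for t, u in enumerate(steps, 1):
        s += u
        if s == 0:
            return t
    return None

def p_delta(k):
    """P(Delta = 2k), by enumerating all 2^(2k) sign sequences."""
    n = 2 * k
    hits = sum(first_return(w) == n for w in product((-1, 1), repeat=n))
    return Fraction(hits, 2**n)

for k in range(1, 7):
    assert p_delta(k) == Fraction(2 * comb(2 * k - 2, k - 1), k * 4**k)
```

Enumeration is exponential in $k$, so this is only a check of the first few atoms, not a computational method.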

# Other method

Write $X_i$ for the Rademacher steps of the statement (so that $S_n=\sum_{i=1}^n X_i$) and set $U_i=(X_i+1)/2$, i.e. $X_i=2U_i-1$; then $(U_i)$ are iid Bernoulli random variables with success probability $1/2$. Then, $$\PP(S_{2n}=0)=\PP\lr{\sum_{i=1}^{2n} U_i=n}=\frac{(2n)!}{(n!)^2} \lr{\frac14}^{n} \sim \frac{1}{\sqrt{\pi n}}\eqsp,$$ where we have used the Stirling equivalence. This implies that $$\label{eq:one} \infty=\sum_{n=1}^\infty \PP(S_{2n}=0)=\PE\lrb{\sum_{n=1}^\infty \indi{0}(S_{2n})}=\PE\lrb{\sum_{k=1}^\infty \indin{T^k<\infty}}=\sum_{k=1}^\infty \PP(T^k<\infty)\eqsp,$$ where $T^k$ is the time index of the $k$-th visit of $(S_n)$ to $0$. By convention, we set $T^1=T$ (that is, $T=\Delta$ in the notation of the statement). We now show by induction that for all $k\geq 1$, $$\label{eq:induc} \PP(T^k<\infty)=\PP(T<\infty)^k\eqsp.$$ The case $k=1$ obviously holds. Now, assume that \eqref{eq:induc} holds for some $k\geq 1$. Then, \begin{align*} \PP(T^{k+1}<\infty)&=\PP(T^{k}<\infty,T^{k+1}<\infty)=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m,T^{k+1}=m+n)\\ &=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m,\forall t\in[1:n-1],\ \sum_{\ell=1}^{t}X_{m+\ell}\neq 0,\ \sum_{\ell=1}^{n}X_{m+\ell}=0)\\ &=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m)\underbrace{\PP(\forall t\in[1:n-1],\ \sum_{\ell=1}^{t}X_{m+\ell}\neq 0,\ \sum_{\ell=1}^{n}X_{m+\ell}=0)}_{\PP(T=n)}\\ &=\PP(T^k<\infty) \PP(T<\infty)\eqsp, \end{align*} where the factorization uses the independence of $(X_{m+\ell})_{\ell\geq 1}$ from the event $\lrc{T^k=m}$, which is measurable with respect to $(X_1,\ldots,X_m)$. By the induction assumption, we get $\PP(T^{k+1}<\infty)=\PP(T<\infty)^{k+1}$. Plugging \eqref{eq:induc} into \eqref{eq:one} yields $$\infty=\sum_{k=1}^\infty \PP(T<\infty)^k\eqsp.$$ Since $\PP(T<\infty) \in[0,1]$, the geometric series can only diverge if $\PP(T<\infty)=1$.
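The two numerical facts used above, the exact value $\PP(S_{2n}=0)=\binom{2n}{n}/4^n$ and its Stirling equivalent $1/\sqrt{\pi n}$, are easy to check directly; a small sketch (names are ours):

```python
from math import comb, pi, sqrt

def p_zero(n):
    """P(S_{2n} = 0) = C(2n, n) / 4^n, exact up to float division."""
    return comb(2 * n, n) / 4**n

# Stirling equivalence: p_zero(n) * sqrt(pi * n) -> 1 as n grows
ratios = {n: p_zero(n) * sqrt(pi * n) for n in (10, 100, 1000)}

# the divergence used in the proof: partial sums of P(S_{2n}=0)
# grow without bound, like 2 * sqrt(n / pi)
partial = sum(p_zero(n) for n in range(1, 2001))
```

Python's arbitrary-precision integers keep `comb(2 * n, n) / 4**n` accurate even when both operands overflow a float.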