world:random-walk [2019/06/21 07:49] douc [Proof]
world:random-walk [2023/04/18 14:58] rdouc [Statement]
{{page>:defs}}
====== Statement ======

<WRAP center round box 80%>
Let $(U_i)$ be iid Rademacher random variables, i.e. $U_i=1$ or $U_i=-1$, each with probability $1/2$, and set $S_i=\sum_{j=1}^i U_j$, the associated partial sum. Define $\Delta=\inf\set{t>0}{S_t=0}$. Show that $(S_n)$ returns to $0$ with probability one. What is the law of $\Delta$?
</WRAP>
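Before turning to the proof, the claim can be sanity-checked by simulation. A minimal sketch using only the standard library; the horizon `max_steps` is an arbitrary truncation (a simulation can only suggest, not certify, an almost-sure event):

```python
import random

def first_return(max_steps=5_000):
    """Run the walk until it first hits 0 again; return the return time
    Delta, or None if no return occurred within max_steps steps."""
    s = 0
    for t in range(1, max_steps + 1):
        s += random.choice((-1, 1))
        if s == 0:
            return t
    return None

random.seed(0)
n_trials = 20_000
deltas = [first_return() for _ in range(n_trials)]
frac_returned = sum(d is not None for d in deltas) / n_trials
frac_two = sum(d == 2 for d in deltas) / n_trials
# Almost all paths return within the horizon; note P(Delta = 2) = 1/2,
# since Delta = 2 exactly when U_2 = -U_1.
print(frac_returned, frac_two)
```

The small fraction of non-returned paths is expected: the tail $\PP(\Delta>2n)$ decays only like $n^{-1/2}$, so some paths exceed any finite horizon even though the return is almost sure.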
+ | |||
+ | ====== Proof ====== | ||
+ | Obviously, $\PP(\Delta=2k+1)=0$. It will be useful in some parts of the proof to note that, by symmetry, $\PP(\Delta=2k, S_1>0)=\PP(\Delta=2k, S_1<0)$, so that $\PP(\Delta=2k)=2\PP(\Delta=2k, S_1>0)$ | ||
+ | Define | ||
+ | \begin{align*} | ||
+ | &\alpha_k=\PP(\Delta=2k, S_1>0)\\ | ||
+ | &\beta_k=\PP(S_{2k}=0, S_i \geq 0, \forall i \in [1:2k-1]) | ||
+ | \end{align*} | ||
Note first that $\alpha_1=1/4$. Moreover, for all $k>1$,
\begin{align*}
\alpha_k&=\PP(\Delta=2k, S_1>0)\\
&=\PP(U_1=1,U_{2k}=-1, U_2+\ldots+U_i\geq 0\ \forall i \in [2:2k-2], U_2+\ldots+U_{2k-1}=0)\\
&= \beta_{k-1}/4
\end{align*}
where the last equality uses the independence of $U_1$, $(U_2,\ldots,U_{2k-1})$ and $U_{2k}$, together with the fact that $(U_2,\ldots,U_{2k-1})$ has the same law as $(U_1,\ldots,U_{2k-2})$.
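The identity $\alpha_k=\beta_{k-1}/4$ can be verified exactly for small $k$ by enumerating all $2^{2k}$ sign sequences. A check sketch in exact arithmetic (`fractions`); the helper `walk_prob` is ad hoc, introduced only for this check:

```python
from fractions import Fraction
from itertools import accumulate, product

def walk_prob(n, keep):
    """P(keep(S_1, ..., S_n)) by enumerating all 2^n walks of length n."""
    hits = sum(1 for signs in product((-1, 1), repeat=n)
               if keep(list(accumulate(signs))))
    return Fraction(hits, 2 ** n)

def alpha(k):
    # alpha_k = P(Delta = 2k, S_1 > 0): first zero exactly at time 2k,
    # with a positive first step.
    return walk_prob(2 * k, lambda p: p[0] > 0 and p[-1] == 0
                     and all(x != 0 for x in p[:-1]))

def beta(k):
    # beta_k = P(S_{2k} = 0 and S_i >= 0 for all i in [1 : 2k-1]).
    return walk_prob(2 * k, lambda p: p[-1] == 0
                     and all(x >= 0 for x in p[:-1]))

assert alpha(1) == Fraction(1, 4)
assert all(alpha(k) == beta(k - 1) / 4 for k in range(2, 7))
```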
Since $\sum_{k=1}^{\infty} \alpha_k=\PP(\Delta \in 2\nset^*,S_1>0)<\infty$, we deduce that $\sum_{k=1}^{\infty}\beta_k<\infty$. This allows us to define $A(z)=\sum_{k=1}^{\infty} \alpha_k z^k$ and $B(z)=\sum_{k=1}^{\infty} \beta_k z^k$ for all $|z| \leq 1$, and the previous identities imply
$$
A(z)=\frac{z}{4}+\frac{z B(z)}{4}
$$
Moreover, by the first-entrance decomposition (partitioning according to the value of $\Delta$, and noting that on the event considered, $S_i \geq 0$ for $i \leq 2\ell-1$ forces $S_1>0$),
\begin{align*}
\beta_k&=\PP(S_{2k}=0, S_i \geq 0, \forall i \in [1:2k-1])\\
&=\sum_{\ell=1}^{k} \PP(\Delta=2\ell, S_1>0, S_{2k}-S_{2\ell}=0, S_i-\underbrace{S_{2\ell}}_{=0}\geq 0,\ \forall i \in [2\ell+1:2k-1])\\
&=\alpha_k+\sum_{\ell=1}^{k-1} \alpha_\ell \beta_{k-\ell}
\end{align*}
where the last equality uses the independence of $(U_i)_{i \leq 2\ell}$ and $(U_i)_{i > 2\ell}$.
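The first-entrance recursion $\beta_k=\alpha_k+\sum_{\ell=1}^{k-1}\alpha_\ell\beta_{k-\ell}$ can likewise be confirmed by exhaustive enumeration for small $k$; a self-contained check sketch in exact arithmetic:

```python
from fractions import Fraction
from itertools import accumulate, product

def walk_prob(n, keep):
    """P(keep(S_1, ..., S_n)) by enumerating all 2^n walks of length n."""
    hits = sum(1 for signs in product((-1, 1), repeat=n)
               if keep(list(accumulate(signs))))
    return Fraction(hits, 2 ** n)

def alpha(k):  # alpha_k = P(Delta = 2k, S_1 > 0)
    return walk_prob(2 * k, lambda p: p[0] > 0 and p[-1] == 0
                     and all(x != 0 for x in p[:-1]))

def beta(k):   # beta_k = P(S_{2k} = 0, S_i >= 0 for i in [1 : 2k-1])
    return walk_prob(2 * k, lambda p: p[-1] == 0
                     and all(x >= 0 for x in p[:-1]))

# First-entrance decomposition:
# beta_k = alpha_k + sum_{l=1}^{k-1} alpha_l * beta_{k-l}
for k in range(1, 7):
    rhs = alpha(k) + sum(alpha(l) * beta(k - l) for l in range(1, k))
    assert beta(k) == rhs
```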
Multiplying by $z^k$ and summing over $k \in \nset^*$ yields $B(z)=A(z)+A(z)B(z)$. Substituting $B(z)=\frac{A(z)-z/4}{z/4}$, obtained from the first identity, gives
$$
\frac{A(z)-z/4}{z/4}=A(z)\lr{1+\frac{A(z)-z/4}{z/4}}
$$
This is equivalent to $0=A^2(z)-A(z)+\frac{z}{4}$, so that $A(z)=\frac{1\pm \sqrt{1-z}}{2}$. Only one of the two roots satisfies $A(0)=0$ and has an expansion with non-negative coefficients, namely $A(z)=\frac{1-\sqrt{1-z}}{2}$. This implies that
$$
\PP(\Delta<\infty)=\sum_{k=1}^\infty \underbrace{\PP(\Delta=2k)}_{2\PP(\Delta=2k,S_1>0)}= 2 A(1)=1
$$
that is, with probability $1$, the random walk returns to $0$.
+ | |||
+ | Moroever, expanding $\frac{1-\sqrt{1-z}}{2}$ yields: $A(z)=\sum_{k=1}^{\infty} \begin{pmatrix}2k-2\\k-1\end{pmatrix} | ||
+ | \frac{1}{k 4^k} z^k$. Therefore, by symmetry, $\PP(\Delta=2k)=2\PP(\Delta=2k,S_1>0)=2\begin{pmatrix}2k-2\\k-1\end{pmatrix}\frac{1}{k 4^k}$. | ||
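This closed form for the law of $\Delta$ can be checked against brute-force enumeration (the first values are $1/2$, $1/8$, $1/16$, $5/128$, $\ldots$); a check sketch:

```python
from fractions import Fraction
from itertools import accumulate, product
from math import comb

def p_delta(k):
    """P(Delta = 2k) by enumerating all 2^(2k) walks of length 2k."""
    hits = 0
    for signs in product((-1, 1), repeat=2 * k):
        p = list(accumulate(signs))  # partial sums S_1, ..., S_{2k}
        if p[-1] == 0 and all(x != 0 for x in p[:-1]):
            hits += 1
    return Fraction(hits, 4 ** k)

# Compare with the closed form P(Delta = 2k) = 2 C(2k-2, k-1) / (k 4^k)
for k in range(1, 7):
    closed = Fraction(2 * comb(2 * k - 2, k - 1), k * 4 ** k)
    assert p_delta(k) == closed

print([p_delta(k) for k in range(1, 5)])  # 1/2, 1/8, 1/16, 5/128
```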
+ | |||
+ | |||
+ | ====== Other method ====== | ||
+ | |||
+ | Note that, setting $U_i$ such that $X_i=2U_i-1$, we have that $(U_i)$ are iid Bernoulli random variables with success probability $1/2$. Then, $\PP(S_{2n}=0)=\PP(\sum_{i=1}^{2n} U_i=n)=\frac{(2n)!}{(n!)^2} \lr{\frac14}^{2n} \sim \frac{1}{\sqrt{n\pi}}$ where we have used the Stirling equivalence. This implies that | ||
\begin{equation}
\label{eq:one}
\infty=\sum_{n=1}^\infty \PP(S_{2n}=0)=\PE[\sum_{n=1}^\infty \indi{0}(S_{2n})]=\PE[\sum_{k=1}^\infty \indi{T^k<\infty}]=\sum_{k=1}^\infty \PP(T^k<\infty)
\end{equation}
where $T^k$ is the time of the $k$-th visit of $(S_n)$ to $0$, with the convention $T^k=\infty$ when the walk visits $0$ fewer than $k$ times, and where $T=T^1=\Delta$ denotes the first return time. We now show by induction that for all $k\geq 1$,
\begin{equation} \label{eq:induc}
\PP(T^k<\infty)=\PP(T<\infty)^k\eqsp.
\end{equation}
The case $k=1$ obviously holds. Now, assume that \eqref{eq:induc} holds for some $k\geq 1$. Then,
\begin{align*}
\PP(T^{k+1}<\infty)&=\PP(T^{k}<\infty,T^{k+1}<\infty)=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m,T^{k+1}=m+n)\\
&=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m,\forall t\in[1:n-1],\ \sum_{\ell=1}^{t}U_{m+\ell}\neq 0,\sum_{\ell=1}^{n}U_{m+\ell}=0)\\
&=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m)\underbrace{\PP(\forall t\in[1:n-1],\ \sum_{\ell=1}^{t}U_{m+\ell}\neq 0,\sum_{\ell=1}^{n}U_{m+\ell}=0)}_{\PP(T=n)}\\
&=\PP(T^k<\infty) \PP(T<\infty)
\end{align*}
where the third equality uses that the event $\{T^{k}=m\}$ depends only on $(U_1,\ldots,U_m)$, which is independent of $(U_{m+1},U_{m+2},\ldots)$. By the induction assumption, we then get $\PP(T^{k+1}<\infty)=\PP(T<\infty)^{k+1}$.
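The factorization above is a renewal property: after each return to $0$, the walk starts afresh. For the first two return times it can be checked exactly that the law of $T^2$ is the convolution of the law of $T$ with itself; a check sketch by enumeration:

```python
from fractions import Fraction
from itertools import accumulate, product

def law_T(k):
    """P(T = 2k): first return to 0 exactly at time 2k."""
    hits = 0
    for signs in product((-1, 1), repeat=2 * k):
        p = list(accumulate(signs))
        if p[-1] == 0 and all(x != 0 for x in p[:-1]):
            hits += 1
    return Fraction(hits, 4 ** k)

def law_T2(k):
    """P(T^2 = 2k): second visit to 0 exactly at time 2k."""
    hits = 0
    for signs in product((-1, 1), repeat=2 * k):
        p = list(accumulate(signs))
        # exactly two zeros, the second one at the final time 2k
        if p[-1] == 0 and p.count(0) == 2:
            hits += 1
    return Fraction(hits, 4 ** k)

# Renewal property: P(T^2 = 2k) = sum_l P(T = 2l) P(T = 2(k-l))
for k in range(2, 7):
    conv = sum(law_T(l) * law_T(k - l) for l in range(1, k))
    assert law_T2(k) == conv
```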
Plugging \eqref{eq:induc} into \eqref{eq:one} yields
$$
\infty=\sum_{k=1}^\infty \PP(T<\infty)^k\eqsp.
$$
Since $\PP(T<\infty) \in[0,1]$ and a geometric series with ratio strictly less than $1$ is finite, this implies that $\PP(T<\infty)=1$.
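Finally, the Stirling equivalence $\binom{2n}{n}4^{-n}\sim 1/\sqrt{\pi n}$, which drives the divergence of $\sum_n \PP(S_{2n}=0)$ in \eqref{eq:one}, is easy to check numerically:

```python
from math import comb, pi, sqrt

# P(S_{2n} = 0) = C(2n, n) / 4^n versus its Stirling equivalent
# 1 / sqrt(pi * n): the ratio tends to 1 (the correction is of order 1/(8n)).
for n in (10, 100, 1000):
    exact = comb(2 * n, n) / 4 ** n
    print(n, exact * sqrt(pi * n))
```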