====== Statement ======
  
<WRAP center round box 80%>
Let $(U_i)$ be iid Rademacher random variables, i.e. $U_i=1$ or $-1$ with probability $1/2$, and set $S_i=\sum_{j=1}^i U_j$ the associated partial sum. Define $\Delta=\inf\set{t>0}{S_t=0}$. Show that the random walk $(S_n)$ returns to $0$ with probability one. What is the law of $\Delta$?
</WRAP>
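
As a quick illustration (this simulation is not part of the original page, and the function and parameter names below are mine), the walk and the first return time $\Delta$ can be simulated as follows; the empirical frequencies can then be compared with the law derived below.

<code python>
import random
from collections import Counter

def first_return_time(max_steps=10_000, rng=random.Random()):
    """Simulate S_t = U_1 + ... + U_t with Rademacher steps U_i and return
    the first t > 0 with S_t = 0 (None if it does not happen before max_steps)."""
    s = 0
    for t in range(1, max_steps + 1):
        s += rng.choice((-1, 1))
        if s == 0:
            return t
    return None

n_runs = 20_000
deltas = [first_return_time() for _ in range(n_runs)]
returned = [d for d in deltas if d is not None]
print("empirical P(Delta <= 10000):", len(returned) / n_runs)

counts = Counter(returned)
for k in range(1, 6):
    print("empirical P(Delta = %d): %.4f" % (2 * k, counts[2 * k] / n_runs))
</code>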
  
====== Proof ======
Obviously, $\PP(\Delta=2k+1)=0$. It will be useful in some parts of the proof to note that, by symmetry, $\PP(\Delta=2k, S_1>0)=\PP(\Delta=2k, S_1<0)$, so that $\PP(\Delta=2k)=2\PP(\Delta=2k, S_1>0)$.
Define
\begin{align*}
[...]
\frac{A(z)-z/4}{z/4}=A(z)\lr{1+\frac{A(z)-z/4}{z/4}}
$$
This is equivalent to $0=A^2(z)-A(z)+\frac{z}{4}$, so that $A(z)=\frac{1\pm \sqrt{1-z}}{2}$. Only one of the two roots has an expansion with only non-negative coefficients, that is, $A(z)=\frac{1-\sqrt{1-z}}{2}$. This implies that
$$
\PP(\Delta<\infty)=\sum_{k=1}^\infty \underbrace{\PP(\Delta=2k)}_{2\PP(\Delta=2k,S_1>0)}= 2 A(1)=1
$$
that is, with probability $1$, this random walk returns to $0$.

Moreover, expanding $\frac{1-\sqrt{1-z}}{2}$ yields $A(z)=\sum_{k=1}^{\infty} \begin{pmatrix}2k-2\\k-1\end{pmatrix}\frac{1}{k 4^k} z^k$. Therefore, by symmetry, $\PP(\Delta=2k)=2\PP(\Delta=2k,S_1>0)=2\begin{pmatrix}2k-2\\k-1\end{pmatrix}\frac{1}{k 4^k}$.
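
For completeness (this derivation is not spelled out on the page), the expansion can be obtained from Newton's generalized binomial formula: since
$$
\sqrt{1-z}=1-\sum_{k=1}^\infty \frac{1}{2k-1}\begin{pmatrix}2k\\k\end{pmatrix}\frac{z^k}{4^k}
$$
we get
$$
A(z)=\frac{1-\sqrt{1-z}}{2}=\sum_{k=1}^\infty \frac{1}{2(2k-1)}\begin{pmatrix}2k\\k\end{pmatrix}\frac{z^k}{4^k}=\sum_{k=1}^\infty \begin{pmatrix}2k-2\\k-1\end{pmatrix}\frac{1}{k 4^k} z^k
$$
where the last equality follows from $\begin{pmatrix}2k\\k\end{pmatrix}=\frac{2(2k-1)}{k}\begin{pmatrix}2k-2\\k-1\end{pmatrix}$.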
  
====== Comments ======

Maybe there exists a simpler way to find the expression of $\alpha_k$... A particular trajectory has probability $(1/2)^{2k}=1/4^k$ of occurring, so it remains to count the number of trajectories. Since $U_1=1$ and $U_{2k}=-1$, it remains to count the trajectories of length $2k-2$ built from the steps $U_2,\ldots,U_{2k-1}$ whose partial sums stay non-negative and end at $0$. Without the non-negativity constraint at each intermediate time, this would amount to choosing the positions of the $k-1$ upward steps among $2k-2$, which counts all the trajectories (even those that cross the $X$-axis), that is $\begin{pmatrix}2k-2\\ k-1\end{pmatrix}$. Comparing with the expression obtained above, we have found a justification for dividing this number by $k$. An idea would be to split these trajectories into well-chosen equivalence classes of size $k$, each containing exactly one non-negative trajectory, but I can't find this equivalence relation...
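
As a brute-force sanity check (not part of the original page; the helper below and its name are mine), one can enumerate all $\pm 1$ sequences of length $2k-2$ and verify that the number of those whose partial sums stay non-negative and end at $0$ is indeed $\begin{pmatrix}2k-2\\ k-1\end{pmatrix}/k$ for small $k$:

<code python>
from itertools import product
from math import comb

def count_nonnegative_bridges(m):
    """Count the +/-1 sequences of length m whose partial sums stay >= 0
    and whose total sum is 0."""
    count = 0
    for steps in product((-1, 1), repeat=m):
        s, ok = 0, True
        for x in steps:
            s += x
            if s < 0:
                ok = False
                break
        if ok and s == 0:
            count += 1
    return count

for k in range(1, 8):
    # exhaustive count vs binom(2k-2, k-1)/k (the division is exact)
    print(k, count_nonnegative_bridges(2 * k - 2), comb(2 * k - 2, k - 1) // k)
</code>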
  
====== Other method ======

Note that, setting $B_i=(U_i+1)/2$, the variables $(B_i)$ are iid Bernoulli random variables with success probability $1/2$. Then, $\PP(S_{2n}=0)=\PP(\sum_{i=1}^{2n} B_i=n)=\frac{(2n)!}{(n!)^2} \lr{\frac14}^{n} \sim \frac{1}{\sqrt{n\pi}}$, where we have used the Stirling equivalence (see the numerical check at the end of this section). This implies that
\begin{equation}
\label{eq:one}
\infty=\sum_{n=1}^\infty \PP(S_{2n}=0)=\PE[\sum_{n=1}^\infty \indi{0}(S_{2n})]=\PE[\sum_{k=1}^\infty \indi{T^k<\infty}]=\sum_{k=1}^\infty \PP(T^k<\infty)
\end{equation}
where $T^k$ is the time index of the $k$-th visit of $(S_n)$ to $0$. By convention, we set $T^1=T$; note that $T$ is nothing but $\Delta$, the first return time to $0$. We now show by induction that for all $k\geq 1$,
\begin{equation}\label{eq:induc}
\PP(T^k<\infty)=\PP(T<\infty)^k\eqsp
\end{equation}
The case $k=1$ obviously holds. Now, assume that \eqref{eq:induc} holds for some $k\geq 1$. Then,
\begin{align*}
\PP(T^{k+1}<\infty)&=\PP(T^{k}<\infty,T^{k+1}<\infty)=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m,T^{k+1}=m+n)\\
&=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m,\forall t\in[1:n-1],\ \sum_{\ell=1}^{t}U_{m+\ell}\neq 0,\sum_{\ell=1}^{n}U_{m+\ell}=0)\\
&=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m)\underbrace{\PP(\forall t\in[1:n-1],\ \sum_{\ell=1}^{t}U_{m+\ell}\neq 0,\sum_{\ell=1}^{n}U_{m+\ell}=0)}_{\PP(T=n)}\\
&=\PP(T^k<\infty) \PP(T<\infty)
\end{align*}
and by the induction assumption, we get $\PP(T^{k+1}<\infty)=\PP(T<\infty)^{k+1}$.
Plugging \eqref{eq:induc} into \eqref{eq:one} yields
$$
\infty=\sum_{k=1}^\infty \PP(T<\infty)^k
$$
Since $\PP(T<\infty) \in[0,1]$, this implies that $\PP(T<\infty)=1$.
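
Here is the numerical check mentioned above (not part of the original page): it compares the exact value of $\PP(S_{2n}=0)$ with the Stirling equivalent $1/\sqrt{n\pi}$, and illustrates that the partial sums in \eqref{eq:one} grow without bound.

<code python>
from math import comb, pi, sqrt

def p_zero(n):
    """Exact P(S_{2n} = 0) = C(2n, n) / 4^n for the simple symmetric walk."""
    return comb(2 * n, n) / 4 ** n

# Stirling equivalence: P(S_{2n} = 0) ~ 1 / sqrt(n * pi)
for n in (10, 100, 1000):
    print(n, p_zero(n), 1 / sqrt(n * pi))

# Partial sums of P(S_{2n} = 0) diverge (they are of order 2 * sqrt(N / pi)),
# computed with the recursion p_n = p_{n-1} * (2n - 1) / (2n)
p, total = 1.0, 0.0
for n in range(1, 10_001):
    p *= (2 * n - 1) / (2 * n)
    total += p
    if n in (10, 100, 1000, 10_000):
        print(n, total, 2 * sqrt(n / pi))
</code>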