world:random-walk [2023/04/18 14:58] (current) rdouc
<WRAP center round box 80%>
Let $(U_i)$ be iid Rademacher random variables, i.e. $U_i=1$ or $-1$, each with probability $1/2$, and set $S_i=\sum_{j=1}^i U_j$, the associated partial sum. Define $\Delta=\inf\set{t>0}{S_t=0}$. Show that $(S_n)$ returns to $0$ with probability one. What is the law of $\Delta$?
</WRAP>
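As an illustrative sanity check (a sketch added here, not part of the original solution), one can simulate the walk and estimate the probability of a return to $0$ within a finite horizon. The horizon `max_steps`, the trial count, and the seed are arbitrary choices; because of truncation, the empirical fraction only lower-bounds the true return probability.

```python
import random

def first_return_time(max_steps=10_000):
    """Simulate S_t with Rademacher steps; return the first t with S_t = 0,
    or None if no return happens within max_steps (truncation artifact)."""
    s = 0
    for t in range(1, max_steps + 1):
        s += random.choice((1, -1))
        if s == 0:
            return t
    return None

random.seed(0)  # for reproducibility
trials = 10_000
returns = [first_return_time() for _ in range(trials)]
frac_returned = sum(r is not None for r in returns) / trials
print(f"fraction of walks back at 0 within the horizon: {frac_returned:.3f}")
```

With this horizon the estimate comes out close to $1$, in line with the statement; the residual gap is the truncation. Note also that every observed return time is even, since $S_t=0$ forces $t$ to be even.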
====== Other method ======
Note that, setting $U_i$ such that $X_i=2U_i-1$, the $(U_i)$ are iid Bernoulli random variables with success probability $1/2$. Then, $\PP(S_{2n}=0)=\PP(\sum_{i=1}^{2n} U_i=n)=\frac{(2n)!}{(n!)^2} \lr{\frac12}^{2n} \sim \frac{1}{\sqrt{n\pi}}$, where we have used the Stirling equivalence. This implies that
\begin{equation}
\label{eq:one}
\infty=\sum_{n=1}^\infty \PP(S_{2n}=0)=\PE[\sum_{n=1}^\infty \indi{0}(S_{2n})]=\PE[\sum_{k=1}^\infty \indi{T^k<\infty}]=\sum_{k=1}^\infty \PP(T^k<\infty)
\end{equation}
where $T^k$ is the time index of the $k$-th visit of $(S_n)$ to $0$ (by convention, $T^1=T$, the first return time to $0$). We now show by induction that for all $k\geq 1$,
\begin{equation} \label{eq:induc}
\PP(T^k<\infty)=\PP(T<\infty)^k\eqsp.
\end{equation}
The case $k=1$ obviously holds. Now, assume that \eqref{eq:induc} holds for some $k\geq 1$. Then,
\begin{align*}
\PP(T^{k+1}<\infty)&=\PP(T^{k}<\infty,T^{k+1}<\infty)=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m,T^{k+1}=m+n)\\
&=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m,\forall t\in[1:n-1],\ \sum_{\ell=1}^{t}X_{m+\ell}\neq 0,\sum_{\ell=1}^{n}X_{m+\ell}=0)\\
&=\sum_{m=1}^\infty \sum_{n=1}^\infty\PP(T^{k}=m)\underbrace{\PP(\forall t\in[1:n-1],\ \sum_{\ell=1}^{t}X_{m+\ell}\neq 0,\sum_{\ell=1}^{n}X_{m+\ell}=0)}_{\PP(T=n)}\\
&=\PP(T^k<\infty) \PP(T<\infty)
\end{align*}
and by the induction assumption, we get $\PP(T^{k+1}<\infty)=\PP(T<\infty)^{k+1}$.
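The factorization $\PP(T^{k+1}=m+n \mid T^k=m)$-structure used above can be checked exactly for small horizons by brute-force enumeration of all sign sequences. This is an illustrative sketch, not part of the original argument; the horizon `N = 12` is an arbitrary choice, and exact rationals avoid any floating-point doubt.

```python
from itertools import product
from fractions import Fraction

N = 12  # horizon: enumerate all 2^N equally likely sign sequences exactly

p1, p2 = {}, {}        # p1[j] = P(T = j),  p2[j] = P(T^2 = j)
w = Fraction(1, 2**N)  # probability of each individual sequence
for signs in product((1, -1), repeat=N):
    s, visits = 0, []
    for t, x in enumerate(signs, start=1):
        s += x
        if s == 0:
            visits.append(t)
    if visits:
        p1[visits[0]] = p1.get(visits[0], 0) + w
    if len(visits) >= 2:
        p2[visits[1]] = p2.get(visits[1], 0) + w

# renewal factorization: P(T^2 = j) = sum_{m+n=j} P(T = m) P(T = n)
for j, prob in p2.items():
    conv = sum(p1.get(m, 0) * p1.get(j - m, 0) for m in range(1, j))
    assert conv == prob, (j, conv, prob)
print("P(T^2 = j) matches the convolution of P(T = .) with itself up to N =", N)
```

Since the events $\{T=j\}$ and $\{T^2=j\}$ with $j\leq N$ depend only on the first $j$ steps, the enumerated probabilities are exact, so the equality is verified with no numerical error.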
Plugging \eqref{eq:induc} into \eqref{eq:one} yields
$$
\infty=\sum_{k=1}^\infty \PP(T<\infty)^k\eqsp.
$$
Since the geometric series $\sum_{k\geq 1} q^k$ is finite for every $q\in[0,1)$, this implies that $\PP(T<\infty)=1$.
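The two numerical ingredients of this method, the Stirling equivalence $\PP(S_{2n}=0)\sim 1/\sqrt{\pi n}$ and the divergence of $\sum_n \PP(S_{2n}=0)$, can be checked in a few lines (an added sketch; the horizon is arbitrary). It uses the exact recurrence $p_n=p_{n-1}\cdot\frac{2n-1}{2n}$, $p_0=1$, which gives $p_n=\binom{2n}{n}4^{-n}$ without manipulating huge factorials.

```python
import math

# p_n = P(S_{2n}=0) = C(2n,n)/4^n via the recurrence p_n = p_{n-1}*(2n-1)/(2n)
p, partial, ratios = 1.0, 0.0, {}
for n in range(1, 10_001):
    p *= (2 * n - 1) / (2 * n)
    partial += p
    if n in (10, 100, 1_000, 10_000):
        ratios[n] = p * math.sqrt(math.pi * n)  # should approach 1 (Stirling)

print(ratios)   # ratios creep up toward 1
print(partial)  # partial sums keep growing with the horizon, ~ 2*sqrt(n/pi)
```

The ratio $p_n\sqrt{\pi n}$ approaches $1$ from below, and the partial sums show no sign of a finite limit, consistent with \eqref{eq:one}.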