A short proof of the Strong Law of Large Numbers...

Statement

$$ \newcommand{\PE}{\mathbb E} \newcommand{\PP}{\mathbb P} \newcommand{\eqsp}{\quad} \newcommand{\nset}{\mathbb N} $$

Theorem: Strong Law of Large Numbers.
If $(X_i)$ are iid random variables and $\PE[|X_1|]<\infty$, then $$ \lim_{n \to \infty} n^{-1} \sum_{i=1}^n X_i=\PE[X_1]\eqsp a.s. $$
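As a concrete illustration (not part of the statement), take $(X_i)$ iid with $\PP(X_1=1)=\PP(X_1=0)=1/2$, i.e. fair coin tosses. Then $\PE[|X_1|]=1/2<\infty$ and the theorem says that the empirical frequency of heads converges almost surely:
$$ \lim_{n \to \infty} n^{-1} \sum_{i=1}^n X_i=\frac{1}{2} \eqsp a.s. $$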

We follow the approach of Bernard Delyon in his unpublished lecture notes on dynamical systems, with only minor adaptations to fit the particular case of iid random variables. The beginning of the proof is close to Neveu's approach, but it then differs substantially. The proof is based on the following elementary lemma:

Lemma: Let $(Y_i)$ be iid random variables such that $\PE[|Y_1|]<\infty$ and $\PE[Y_1]>0$. Then, a.s.,

$$\liminf_n S_n/n\geq 0$$

where $S_n=\sum_{i=1}^n Y_i$.

Proof (of the Lemma) $\blacktriangleright$ Set $L_n=\inf(S_k, k \in [1:n])$, $L_\infty=\inf(S_k, k \in \nset^*)$ and $A=\{L_\infty=-\infty\}$. Let $\theta(y_1,y_2,\ldots)=(y_2,y_3,\ldots)$ be the shift operator. Then, a.s., \begin{align*} L_n &=S_1+\inf(0,S_2-S_1,\ldots, S_n-S_1)=Y_1+\inf(0,L_{n-1} \circ \theta)\\ &\geq Y_1 + \inf(0,L_n \circ \theta)=Y_1 - L_n^- \circ \theta, \end{align*} where the inequality follows from the fact that $n \mapsto L_n$ is nonincreasing. Since $L_n^- \circ \theta$ is a.s. finite, this implies, a.s., $$ 1_A Y_1 \leq 1_A L_n +1_A L_n^- \circ \theta. $$

Taking expectations on both sides and using $\PP(1_A=1_A \circ \theta)=1$ together with the strong stationarity of the sequence, $$ \PE[1_A Y_1] \leq \PE[1_A L_n]+\PE[1_A\circ \theta\ L_n^- \circ \theta]=\PE[1_A L_n]+\PE[1_A L^-_n]=\PE[1_A L^+_n] \to 0, $$ where the right-hand side tends to 0 by the dominated convergence theorem since, a.s., $\lim_n 1_A L_n^+ =1_A L_\infty^+ =0$ and $0\leq 1_A L_n^+ \leq Y_1^+$. Hence $\PE[1_A Y_1]\leq 0$.

Therefore, noting that $1_A \circ \theta$ is independent of $Y_1$, $$ 0 \geq \PE[1_A Y_1]=\PE[1_A\circ \theta\ Y_1]=\PE[1_A\circ \theta]\PE[Y_1]=\underbrace{\PE[1_A]}_{\geq 0} \underbrace{\PE[Y_1]}_{>0}. $$ This implies $\PP(A)=0$ and the lemma is proved. $\blacktriangleleft$
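A detail used twice above but not spelled out is that $\PP(1_A=1_A \circ \theta)=1$. One way to see it (a small sketch): letting $n \to \infty$ in the identity $L_n=Y_1+\inf(0,L_{n-1} \circ \theta)$ gives, a.s., $$ L_\infty=Y_1+\inf(0,L_\infty \circ \theta), $$ and since $Y_1$ is a.s. finite, $L_\infty=-\infty$ if and only if $L_\infty \circ \theta=-\infty$, that is, $1_A=1_A \circ \theta$ a.s.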


Proof (of the Theorem)

Without loss of generality (replacing $X_i$ by $X_i-\PE[X_1]$ if necessary), we assume that $\PE[X_1]=0$. Applying the lemma with $Y_i=X_i+\epsilon$ (where $\epsilon>0$), we get $\liminf_n n^{-1} \sum_{i=1}^n X_i \geq -\epsilon$ a.s. Applying the lemma again with $Y_i=-X_i+\epsilon$, we get, a.s., $\limsup_n n^{-1} \sum_{i=1}^n X_i \leq \epsilon$, which finishes the proof since $\epsilon>0$ is arbitrary. $\blacksquare$
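To make the final step explicit, one can combine the two bounds: for every $\epsilon>0$, a.s., $$ -\epsilon \leq \liminf_{n\to\infty} n^{-1}\sum_{i=1}^n X_i \leq \limsup_{n\to\infty} n^{-1}\sum_{i=1}^n X_i \leq \epsilon, $$ and taking $\epsilon=1/k$ for $k \in \nset^*$ and intersecting the corresponding a.s. events shows that $\lim_{n\to\infty} n^{-1}\sum_{i=1}^n X_i=0=\PE[X_1]$ a.s.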


Another approach

Another approach, due to Etemadi 1), shows the result for identically distributed and pairwise independent random variables. He first reduces to nonnegative random variables. The first step shows that, for the truncated variables $Y_i=X_i{\mathsf 1}_{\{X_i<i\}}$ (where now $S_n$ denotes $\sum_{i=1}^n Y_i$), the normalized sum $S_{k_n}/k_n$ converges almost surely along the subsequence $k_n=\lfloor \alpha^n\rfloor$ for a fixed $\alpha>1$. The second step shows that $Y_i\neq X_i$ only finitely many times a.s., and the last step is to let $\alpha$ go to 1.
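For the second step, here is a sketch of the standard Borel–Cantelli argument (assuming, as in Etemadi's reduction, that the $X_i$ are nonnegative): since the $X_i$ are identically distributed, $$ \sum_{i=1}^{\infty}\PP(Y_i\neq X_i)=\sum_{i=1}^{\infty}\PP(X_i\geq i)=\sum_{i=1}^{\infty}\PP(X_1\geq i)\leq \int_0^{\infty}\PP(X_1\geq t)\,dt=\PE[X_1]<\infty, $$ so $Y_i\neq X_i$ for only finitely many $i$ a.s., and the averages of the $X_i$ and of the $Y_i$ have the same asymptotic behaviour.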

1)
N. Etemadi, An Elementary Proof of the Strong Law of Large Numbers, 1981.