Welcome to Randal Douc's wiki

A collaborative site on maths but not only!

$$ \newcommand{\arginf}{\mathrm{arginf}} \newcommand{\argmin}{\mathrm{argmin}} \newcommand{\argmax}{\mathrm{argmax}} \newcommand{\asconv}[1]{\stackrel{#1-a.s.}{\rightarrow}} \newcommand{\Aset}{\mathsf{A}} \newcommand{\b}[1]{{\mathbf{#1}}} \newcommand{\ball}[1]{\mathsf{B}(#1)} \newcommand{\bbQ}{{\mathbb Q}} \newcommand{\bproof}{\textbf{Proof :}\quad} \newcommand{\bmuf}[2]{b_{#1,#2}} \newcommand{\card}{\mathrm{card}} \newcommand{\chunk}[3]{{#1}_{#2:#3}} \newcommand{\condtrans}[3]{p_{#1}(#2|#3)} \newcommand{\convprob}[1]{\stackrel{#1-\text{prob}}{\rightarrow}} \newcommand{\Cov}{\mathbb{C}\mathrm{ov}} \newcommand{\cro}[1]{\langle #1 \rangle} \newcommand{\CPE}[2]{\PE\lr{#1| #2}} \renewcommand{\det}{\mathrm{det}} \newcommand{\dimlabel}{\mathsf{m}} \newcommand{\dimU}{\mathsf{q}} \newcommand{\dimX}{\mathsf{d}} \newcommand{\dimY}{\mathsf{p}} \newcommand{\dlim}{\Rightarrow} \newcommand{\e}[1]{{\left\lfloor #1 \right\rfloor}} \newcommand{\eproof}{\quad \Box} \newcommand{\eremark}{</WRAP>} \newcommand{\eqdef}{:=} \newcommand{\eqlaw}{\stackrel{\mathcal{L}}{=}} \newcommand{\eqsp}{\;} \newcommand{\Eset}{ {\mathsf E}} \newcommand{\esssup}{\mathrm{essup}} \newcommand{\fr}[1]{{\left\langle #1 \right\rangle}} \newcommand{\falph}{f} \renewcommand{\geq}{\geqslant} \newcommand{\hchi}{\hat \chi} \newcommand{\Hset}{\mathsf{H}} \newcommand{\Id}{\mathrm{Id}} \newcommand{\img}{\text{Im}} \newcommand{\indi}[1]{\mathbf{1}_{#1}} \newcommand{\indiacc}[1]{\mathbf{1}_{\{#1\}}} \newcommand{\indin}[1]{\mathbf{1}\{#1\}} \newcommand{\itemm}{\quad \quad \blacktriangleright \;} \newcommand{\jointtrans}[3]{p_{#1}(#2,#3)} \newcommand{\ker}{\text{Ker}} \newcommand{\klbck}[2]{\mathrm{K}\lr{#1||#2}} \newcommand{\law}{\mathcal{L}} \newcommand{\labelinit}{\pi} \newcommand{\labelkernel}{Q} \renewcommand{\leq}{\leqslant} \newcommand{\lone}{\mathsf{L}_1} \newcommand{\lrav}[1]{\left|#1 \right|} \newcommand{\lr}[1]{\left(#1 \right)} \newcommand{\lrb}[1]{\left[#1 \right]} \newcommand{\lrc}[1]{\left\{#1 \right\}} 
\newcommand{\lrcb}[1]{\left\{#1 \right\}} \newcommand{\ltwo}[1]{\PE^{1/2}\lrb{\lrcb{#1}^2}} \newcommand{\Ltwo}{\mathrm{L}^2} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mcbb}{\mathcal B} \newcommand{\mcf}{\mathcal{F}} \newcommand{\meas}[1]{\mathrm{M}_{#1}} \newcommand{\norm}[1]{\left\|#1\right\|} \newcommand{\normmat}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \newcommand{\nset}{\mathbb N} \newcommand{\N}{\mathcal{N}} \newcommand{\one}{\mathsf{1}} \newcommand{\PE}{\mathbb E} \newcommand{\pminfty}{_{-\infty}^\infty} \newcommand{\PP}{\mathbb P} \newcommand{\projorth}[1]{\mathsf{P}^\perp_{#1}} \newcommand{\Psif}{\Psi_f} \newcommand{\pscal}[2]{\langle #1,#2\rangle} \newcommand{\pscal}[2]{\langle #1,#2\rangle} \newcommand{\psconv}{\stackrel{\PP-a.s.}{\rightarrow}} \newcommand{\qset}{\mathbb Q} \newcommand{\revcondtrans}[3]{q_{#1}(#2|#3)} \newcommand{\rmd}{\mathrm d} \newcommand{\rme}{\mathrm e} \newcommand{\rmi}{\mathrm i} \newcommand{\Rset}{\mathbb{R}} \newcommand{\rset}{\mathbb{R}} \newcommand{\rti}{\sigma} \newcommand{\section}[1]{==== #1 ====} \newcommand{\seq}[2]{\lrc{#1\eqsp: \eqsp #2}} \newcommand{\set}[2]{\lrc{#1\eqsp: \eqsp #2}} \newcommand{\sg}{\mathrm{sgn}} \newcommand{\supnorm}[1]{\left\|#1\right\|_{\infty}} \newcommand{\thv}{{\theta_\star}} \newcommand{\tmu}{ {\tilde{\mu}}} \newcommand{\Tset}{ {\mathsf{T}}} \newcommand{\Tsigma}{ {\mathcal{T}}} \newcommand{\ttheta}{{\tilde \theta}} \newcommand{\tv}[1]{\left\|#1\right\|_{\mathrm{TV}}} \newcommand{\unif}{\mathrm{Unif}} \newcommand{\weaklim}[1]{\stackrel{\mathcal{L}_{#1}}{\rightsquigarrow}} \newcommand{\Xset}{{\mathsf X}} \newcommand{\Xsigma}{\mathcal X} \newcommand{\Yset}{{\mathsf Y}} \newcommand{\Ysigma}{\mathcal Y} \newcommand{\Var}{\mathbb{V}\mathrm{ar}} \newcommand{\zset}{\mathbb{Z}} \newcommand{\Zset}{\mathsf{Z}} $$

2017/10/07 23:39 · douc

$g$-lemma

Let $X$ be an a.s. non-negative random variable and $g \colon \rset_+ \rightarrow \rset_+$ an increasing differentiable function such that $g(0)=0$. Then, \begin{equation*} \PE\lrb{g(X)} = \int_{\rset_+} g'(x) \PP\lr{X \geq x} \rmd x \in \rset_+ \cup \lrc{+\infty}. \end{equation*}

Proof

Using Tonelli's theorem (the integrand satisfies $g'(x) 1_{X\geq x}\geq 0$ for all $x\in \rset_+$), write \begin{equation*} \int_{\rset_+} g'(x) \underbrace{\PP\lr{X \geq x}}_{\PE\lrb{1_{X\geq x}}} \rmd x = \int_{\rset_+} \PE \lrb{g'(x) 1_{X\geq x}} \rmd x = \PE \lrb{\int_{\rset_+} g'(x) 1_{X\geq x} \rmd x} = \PE \lrb{\int_{0}^X g'(x)\rmd x} = \PE \lrb{g(X) - \underbrace{g(0)}_{=0}}. \end{equation*}
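As a sanity check (not part of the original page), the identity can be verified numerically for an illustrative choice: $X \sim \mathrm{Exp}(1)$ and $g(x) = x^2$, for which $\PE\lrb{g(X)} = 2$ and $\PP\lr{X \geq x} = e^{-x}$.

```python
import math

def g_prime(x):    # derivative of g(x) = x^2
    return 2.0 * x

def tail(x):       # P(X >= x) for X ~ Exp(1)
    return math.exp(-x)

# Right-hand side: integral of g'(x) P(X >= x) over [0, +inf),
# approximated by the trapezoidal rule on [0, 50] (the tail beyond 50
# is negligible for this integrand).
N, T = 200_000, 50.0
h = T / N
rhs = sum(g_prime(i * h) * tail(i * h) for i in range(1, N)) * h \
      + 0.5 * h * (g_prime(0.0) * tail(0.0) + g_prime(T) * tail(T))

lhs = 2.0          # E[X^2] = 2 for X ~ Exp(1)
print(lhs, rhs)
```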

Convexity inequality

Let $p \geq 1$, $n \in \nset$ and $\lr{X_i}_{1\leq i \leq n}$ real-valued random variables. Then, by convexity of $t \mapsto \lrav{t}^p$, \begin{equation*} \PE \lrb{\lrav{\sum_{i=1}^n X_i}^p} \leq n^{p-1} \sum_{i=1}^n \PE \lrb{\lrav{X_i}^p}. \end{equation*}
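The inequality actually holds pointwise: $\lrav{\sum_{i=1}^n x_i}^p = n^p \lrav{\frac 1 n \sum_{i=1}^n x_i}^p \leq n^{p-1} \sum_{i=1}^n \lrav{x_i}^p$ by Jensen applied to the uniform weights. A quick numerical check on random inputs (illustrative, fixed seed, not from the original page):

```python
import random

def check_convexity(p, n, trials, seed=0):
    """Check |sum x_i|^p <= n^{p-1} sum |x_i|^p on random real inputs.

    The inequality holds pointwise, so it must hold for every sample.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.uniform(-10.0, 10.0) for _ in range(n)]
        lhs = abs(sum(xs)) ** p
        rhs = n ** (p - 1) * sum(abs(x) ** p for x in xs)
        if lhs > rhs * (1.0 + 1e-12):   # small tolerance for rounding
            return False
    return True

print(check_convexity(3.0, 5, 1000))
```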

Marcinkiewicz–Zygmund inequality

Let $p \geq 2$, $n \in \nset$ and $\lr{X_i}_{1\leq i \leq n}$ centered independent real-valued random variables in $L^p$. Then, there exists a constant $C_p$ depending only on $p$ such that \begin{equation*} \PE \lrb{\lrav{\sum_{i=1}^n X_i}^p} \leq C_p \eqsp n^{p/2-1} \sum_{i=1}^n \PE \lrb{\lrav{X_i}^p}. \end{equation*}

Proof

Set $S_n \eqdef \sum_{i=1}^n X_i$. Let $x > 0$. We first establish an upper-bound for $\PP \lr{\lrav{S_n} \geq x}$.

Let $y > 0$ and define for all $i \in [1;n]$, $Z_i \eqdef X_i 1_{X_i < y}$ and $T_n \eqdef \sum_{i=1}^n Z_i$. Then, \begin{equation} \label{eq:s_n_t_n} \PP \lr{S_n \geq x} \leq \PP \lr{T_n \geq x} + \PP \lr{S_n \neq T_n} \leq \PP \lr{T_n \geq x} + \sum_{i=1}^n \PP \lr{X_i \geq y}. \end{equation} Let $h > 0$. The Chernoff bound, together with the independence of the $\lr{Z_i}_{1\leq i \leq n}$ (inherited from that of the $\lr{X_i}_{1\leq i \leq n}$), provides \begin{equation} \label{eq:t_n_only} \PP \lr{T_n \geq x} \leq e^{-hx} \PE\lrb{e^{h T_n}} = e^{-hx} \prod_{i=1}^n \PE\lrb{e^{h Z_i}}. \end{equation} The Taylor formula with integral remainder applied to the exponential function shows that the function defined on $\rset$ by $s \mapsto \frac {e^s-1-s} {s^2} = \frac 1 {s^2} \int_0^s (s-u)e^u \rmd u= \int_0^1 (1-u)e^{su} \rmd u$ is increasing, and together with $h Z_i \leq h y$ for all $i \in [1;n]$, we deduce \begin{equation*} e^{h Z_i} \leq 1 + h Z_i + Z_i^2 \frac {e^{hy}-1-hy} {y^2}. \end{equation*} The fact that $y>0$ implies $Z_i \leq X_i$ and thus $\PE \lrb{Z_i} \leq \PE \lrb{X_i} = 0$. Combining with $\PE \lrb{Z_i^2} = \PE \lrb{X_i^2 1_{X_i < y}} \leq \PE \lrb{X_i^2}$ yields for all $i \in [1;n]$, \begin{equation*} \PE \lrb{e^{h Z_i}} \leq 1 + \PE \lrb{X_i^2} \frac {e^{hy}-1-hy} {y^2}. \end{equation*} Using $1+u \leq e^u$ and combining with \eqref{eq:s_n_t_n} and \eqref{eq:t_n_only}, this provides \begin{equation} \label{eq:s_n_step_1} \PP \lr{S_n \geq x} \leq \sum_{i=1}^n \PP \lr{X_i \geq y} + \exp \lrb{-hx + B_n \frac {e^{hy}-1-hy} {y^2}}, \end{equation} where $B_n \eqdef \sum_{i=1}^n \PE \lrb{X_i^2} < \infty$. Note that $B_n = 0$ implies that the $\lr{X_i}_{1 \leq i \leq n}$ are all equal to zero a.s., a situation where the inequality is trivially true; we can thus assume $B_n > 0$. 
The argument of the exponential in \eqref{eq:s_n_step_1} is then minimized in $h$ at $h_{\min} \eqdef \frac 1 y \log \lr{1 + \frac {xy} {B_n}}$, with \begin{equation*} -h_{\min}x + B_n \frac {e^{h_{\min}y}-1-h_{\min}y} {y^2} = - \frac x y \log \lr{1 + \frac {xy} {B_n}} + \frac {B_n} {y^2} \lrb{\frac {xy} {B_n} \underbrace{- \log \lr{1 + \frac {xy} {B_n}}}_{\leq 0}} \leq \frac x y - \frac x y \log \lr{1 + \frac {xy} {B_n}}. \end{equation*}

Indeed, the function defined on $\rset_+^*$ by $h \mapsto -hx + B_n \frac {e^{hy}-1-hy} {y^2}$ is continuous, diverges to $+\infty$ as $h \rightarrow +\infty$, and its derivative $h \mapsto -x + \frac {B_n} y \lr{e^{hy}-1}$ is increasing with a unique zero $h_{\min}$ on $\rset_+^*$, characterized by $e^{h_{\min}y}-1 = \frac {xy} {B_n}$.
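The closed form for $h_{\min}$ can be checked numerically against a grid minimization, for arbitrary illustrative values of $x$, $y$ and $B_n$ (this check is not part of the original page):

```python
import math

x, y, B = 2.0, 1.0, 3.0    # arbitrary positive illustrative values

def f(h):
    # exponent to be minimized: -h x + B (e^{h y} - 1 - h y) / y^2
    return -h * x + B * (math.exp(h * y) - 1.0 - h * y) / y ** 2

# Closed-form minimizer: h_min = (1/y) log(1 + x y / B).
h_min = math.log(1.0 + x * y / B) / y

# Brute-force minimization over a fine grid of h in (0, 5].
grid = [1e-6 + k * 1e-4 for k in range(50_000)]
h_grid = min(grid, key=f)
print(h_min, h_grid)
```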

With $y = \frac x r$ where $r > 0$, combining with \eqref{eq:s_n_step_1} yields \begin{equation*} \PP \lr{S_n \geq x} \leq \sum_{i=1}^n \PP \lr{X_i \geq \frac x r} + e^r \lr{1 + \frac {x^2} {r B_n}}^{-r}. \end{equation*} Considering $\lr{-X_i}_{1 \leq i \leq n}$ provides a similar inequality for $-S_n$; since $x > 0$, the events $\lrc{S_n \geq x}$ and $\lrc{-S_n \geq x}$ are disjoint, and we deduce \begin{equation*} \PP \lr{\lrav{S_n} \geq x} = \PP \lr{S_n \geq x} + \PP \lr{-S_n \geq x} \leq \sum_{i=1}^n \PP \lr{\lrav{X_i} \geq \frac x r}+ 2 e^r \lr{1 + \frac {x^2} {r B_n}}^{-r}. \end{equation*}
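This tail bound can be checked exactly for i.i.d. Rademacher signs (so $B_n = n$), whose partial sum has a binomial law; choosing $x/r > 1$ makes the individual-tail sum vanish since $\lrav{X_i} = 1$ a.s. (illustrative check, not from the original page):

```python
import math

n, x, r = 100, 30.0, 2.0
B_n = float(n)    # sum of E[X_i^2], equal to n for Rademacher signs

# Exact P(|S_n| >= x): S_n = 2K - n with K ~ Binomial(n, 1/2),
# and the two tails {S_n >= x}, {S_n <= -x} are symmetric and disjoint.
k_min = math.ceil((n + x) / 2)
tail = 2.0 * sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2 ** n

# Right-hand side with y = x / r; here x / r > 1 so P(|X_i| >= x/r) = 0.
indiv = n if 1.0 >= x / r else 0
bound = indiv + 2.0 * math.exp(r) * (1.0 + x ** 2 / (r * B_n)) ** (-r)
print(tail, bound)
```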

Using the $g$-lemma with $g \colon x \mapsto x^p$ we deduce \begin{align*} \PE \lrb{\lrav{S_n}^p} = p \int_{\rset_+} x^{p-1} \PP\lr{\lrav{S_n} \geq x} \rmd x &\leq \sum_{i=1}^n p \int_{\rset_+} x^{p-1} \PP\lr{\lrav{X_i} \geq \frac x r} \rmd x + 2p e^r \int_{\rset_+} \frac {x^{p-1}} {\lr{1 + \frac {x^2} {r B_n}}^r} \rmd x \\ &= r^p \sum_{i=1}^n \PE \lrb{\lrav{X_i}^p} + p e^r B_n^{p/2} \int_0^{+\infty} \frac {u^{p/2-1}} {\lr{1+\frac u r}^r} \rmd u \quad \in \rset_+ \cup \lrc{+\infty}, \end{align*} with the change of variables $v = \frac x r$ in the first integral and $u = \frac {x^2} {B_n}$ in the second. The integral is finite if and only if $r>p/2$, and we can choose $r = p$ to deduce the Rosenthal inequality: \begin{equation} \label{eq:rosenthal} \PE \lrb{\lrav{S_n}^p} \leq c_p \lr{\sum_{i=1}^n \PE \lrb{\lrav{X_i}^p} + \lr{\sum_{i=1}^n \PE \lrb{X_i^2}}^{p/2}}, \end{equation} where $c_p \eqdef \max\lr{p^p, p e^p \int_{\rset_+} \frac {u^{p/2-1}} {\lr{1+\frac u p}^p} \rmd u}$ only depends on $p$. Finally, Jensen's inequality (the function $t \mapsto t^{p/2}$ is convex since $p \geq 2$) followed by the convexity inequality above gives \begin{equation*} \lr{\frac 1 n \sum_{i=1}^n \PE \lrb{X_i^2}}^{p/2} = \lr{\PE \lrb{\frac 1 n \sum_{i=1}^n X_i^2}}^{p/2} \leq \PE \lrb{\lr{\frac 1 n \sum_{i=1}^n X_i^2}^{p/2}} \leq \PE \lrb{\frac 1 n \sum_{i=1}^n \lrav{X_i}^p}. \end{equation*}

Multiplying the last display by $n^{p/2}$ gives $\lr{\sum_{i=1}^n \PE \lrb{X_i^2}}^{p/2} \leq n^{p/2-1} \sum_{i=1}^n \PE \lrb{\lrav{X_i}^p}$, which together with the Rosenthal inequality \eqref{eq:rosenthal} yields the Marcinkiewicz–Zygmund inequality: \begin{equation*} \PE \lrb{\lrav{\sum_{i=1}^n X_i}^p} \leq C_p \eqsp n^{p/2-1} \sum_{i=1}^n \PE \lrb{\lrav{X_i}^p}, \end{equation*} where $C_p \eqdef 2 c_p$ (using $n^{p/2-1} \geq 1$).
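The $n^{p/2}$ growth is attained for i.i.d. Rademacher signs, for which $\PE\lrb{S_n^4} = 3n^2 - 2n$. A small script (illustrative, not from the original page) computes this fourth moment exactly from the binomial law and checks the $p=4$ bound with constant $3$:

```python
import math

def exact_fourth_moment(n):
    """E[S_n^4] for S_n a sum of n iid Rademacher signs, via K ~ Bin(n, 1/2):
    S_n = 2K - n, so the moment is an exact finite sum."""
    return sum(math.comb(n, k) * (2 * k - n) ** 4 for k in range(n + 1)) / 2 ** n

for n in (1, 2, 5, 10, 50):
    m4 = exact_fourth_moment(n)
    # Marcinkiewicz-Zygmund with p = 4 reads E[S_n^4] <= C_4 n sum E[X_i^4]
    # = C_4 n^2; here m4 / n^2 = 3 - 2/n stays bounded, as predicted.
    print(n, m4 / n ** 2)
```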

Generalized Marcinkiewicz–Zygmund inequality

Let $d \in \nset^*$ and $\norm{\cdot}$ a norm on $\rset^d$. Let $n \in \nset^*$ and $\lr{X_i}_{1 \leq i \leq n}$ independent random variables of $L^p(\rset^d)$ with $2 \leq p < \infty$. Then, \begin{equation*} \mathbb{E}\lrb{\norm{\sum_{i=1}^n \lr{X_i-\mathbb{E}\lrb{X_i}} }^p} \leq C_{p, \norm{\cdot}} \times n^{p/2-1} \times \sum_{i=1}^n \mathbb{E}\lrb{\norm{X_i}^p} , \end{equation*} where $C_{p, \norm{\cdot}}$ is a constant depending only on $p$ and on the choice of the norm $\norm{\cdot}$.

Proof

First, notice that the result only needs to be proved for centered random variables. Indeed, by convexity of $t \mapsto t^p$ and Jensen's inequality ($\norm{\mathbb{E}\lrb{X}}^p \leq \mathbb{E}\lrb{\norm{X}^p}$), for any random variable $X$, \begin{equation*} \mathbb{E}\lrb{\norm{X-\mathbb{E}\lrb{X}}^p} \leq 2^{p-1} \mathbb{E}\lrb{\norm{X}^p + \norm{\mathbb{E}\lrb{X}}^p} \leq 2^p \mathbb{E}\lrb{\norm{X}^p} . \end{equation*} Moreover, by equivalence of norms in finite dimension, the result only needs to be proved for the norm $\norm{\cdot}_p$ on $\rset^d$. Using the Marcinkiewicz–Zygmund inequality in dimension 1 coordinatewise provides \begin{align*} \mathbb{E}\lrb{\norm{\sum_{i=1}^n X_i }_p^p} &= \mathbb{E}\lrb{\sum_{j=1}^d \lrav{ \sum_{i=1}^n X_i(j) }^p} \\ &= \sum_{j=1}^d \mathbb{E}\lrb{\lrav{ \sum_{i=1}^n X_i(j) }^p} \\ &\leq \sum_{j=1}^d C_p \times n^{p/2-1} \times \sum_{i=1}^n \mathbb{E}\lrb{\lrav{X_i(j)}^p} \\ &= C_p \times n^{p/2-1} \times \sum_{i=1}^n \mathbb{E}\lrb{\sum_{j=1}^d \lrav{X_i(j)}^p} \\ &= C_p \times n^{p/2-1} \times \sum_{i=1}^n \mathbb{E}\lrb{\norm{X_i}_p^p} \eqsp. \end{align*}
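For $p = 2$ and the Euclidean norm, independence and centering give the exact identity $\mathbb{E}\lrb{\norm{\sum_{i=1}^n X_i}_2^2} = \sum_{i=1}^n \mathbb{E}\lrb{\norm{X_i}_2^2}$, i.e. the bound holds with $C = 1$ and $n^{p/2-1} = 1$. A Monte Carlo sketch (illustrative values, fixed seed, not from the original page) checks this in $\rset^3$:

```python
import random

random.seed(1)
d, n, trials = 3, 8, 20_000

def sample_X():
    """Centered random vector in R^3: independent Uniform(-1, 1) coordinates."""
    return [random.uniform(-1.0, 1.0) for _ in range(d)]

def sq_norm(v):
    return sum(c * c for c in v)

# Monte Carlo estimate of E || sum_i X_i ||_2^2 for n iid centered vectors.
acc = 0.0
for _ in range(trials):
    s = [0.0] * d
    for _ in range(n):
        s = [a + b for a, b in zip(s, sample_X())]
    acc += sq_norm(s)
lhs = acc / trials

# Independence gives equality: E||S_n||^2 = n E||X||^2 = n d / 3 here,
# since each coordinate has variance 1/3.
rhs = n * d / 3.0
print(lhs, rhs)
```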

Some notation

Reminder of the notation introduced in Fort and Moulines (2003), p. 12.

Let $p > 0$, $\lr{X_n}_{n \in \nset}$ a sequence of random variables and $\lr{\alpha_n}_{n \in \nset}$ a sequence of nonzero real numbers. We write $X_n = O_{L^p}(\alpha_n)$ if $\lr{\alpha_n^{-1} X_n}_{n \in \nset}$ is bounded in $L^p$.

$O$-lemma

Let $p > 0$ and $\lr{X_n}_{n \in \nset}$ a sequence of random variables such that $X_n = O_{L^p}(\alpha_n)$ with $\sum_{n=0}^{\infty} \lrav{\alpha_n}^p < \infty$. Then, \begin{equation*} \lr{X_n}_{n \in \nset} \overset{a.s.}{\rightarrow} 0. \end{equation*}

Proof

By assumption, there exists $C \in \rset_+^*$ such that for all $n \in \nset$, $\lrav{\alpha_n}^{-1} \norm{X_n}_{L^p} \leq C$.

Let $\epsilon > 0$. By the Markov inequality, for all $n \in \nset$, \begin{equation*} \mathbb{P}\lr{\norm{X_n} \geq \epsilon} \leq \frac {\PE\lrb{\norm{X_n}^p}} {\epsilon^p} = \frac {\norm{X_n}_{L^p}^p} {\epsilon^p} \leq \frac {C^p} {\epsilon^p} \lrav{\alpha_n}^p, \end{equation*} hence \begin{equation*} \sum_{n=0}^{\infty} \mathbb{P}\lr{\norm{X_n} \geq \epsilon} \leq \frac {C^p} {\epsilon^p} \sum_{n=0}^{\infty} \lrav{\alpha_n}^p < \infty . \end{equation*} By the Borel–Cantelli lemma, almost surely $\norm{X_n} < \epsilon$ for sufficiently large $n$. This holds for every $\epsilon = \frac 1 k$ with $k \in \nset^*$, and the countable intersection of these almost sure events gives $\lr{X_n}_{n \in \nset} \overset{a.s.}{\rightarrow} 0$.
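A quick simulation of the lemma (illustrative, fixed seed, not from the original page): take $p = 2$, $\alpha_n = 1/(n+1)$ (so $\sum_n \alpha_n^2 < \infty$) and $X_n = \alpha_n Z_n$ with $Z_n$ standard Gaussian, so that $\lr{\alpha_n^{-1} X_n}_n$ is bounded in $L^2$; along a sample path the sequence dies out:

```python
import random

random.seed(2)
alphas = [1.0 / (n + 1) for n in range(10_000)]   # sum alpha_n^2 < inf
# X_n = alpha_n * Z_n with Z_n ~ N(0, 1): alpha_n^{-1} X_n is bounded in L^2.
xs = [a * random.gauss(0.0, 1.0) for a in alphas]

# The tail supremum along this path is already tiny, as the lemma predicts.
tail_sup = max(abs(x) for x in xs[1000:])
print(tail_sup)
```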

Triangular strong law of large numbers

Let $d \in \nset^*$ and $\lr{X_{n,i}}_{1 \leq i \leq m_n, n\in\nset}$ i.i.d. $\rset^d$-valued random variables, with $\lr{m_n}_{n \in \nset} \in {\nset^*}^{\nset}$. Assume the existence of $p \geq 2$ such that $X_{1,1} \in L^p$ and $\sum_{n=1}^{\infty} m_n^{-p/2} < \infty$. Then, \begin{equation*} \frac 1 {m_n} \sum_{i=1}^{m_n} X_{n,i} \overset{a.s.}{\rightarrow} \PE\lr{X_{1,1}}. \end{equation*}

Proof

The $\lr{X_{n,i}}_{1 \leq i \leq m_n, n\in\nset}$ being i.i.d. and in $L^p(\rset^d)$, the generalized Marcinkiewicz–Zygmund inequality (applied, for each fixed $n$, to the centered variables $X_{n,i} - \PE\lr{X_{1,1}}$) yields \begin{equation*} \frac 1 {m_n} \sum_{i=1}^{m_n} X_{n,i} - \PE\lr{X_{1,1}} = O_{L^p}\lr{m_n^{-1/2}}. \end{equation*} As $\sum_{n=1}^{\infty} m_n^{-p/2} < \infty$ by assumption, the $O$-lemma concludes the proof.

Remark: The assumptions of the theorem hold as soon as $m_n = n$ for all $n \in \nset^*$ and $X_{1,1} \in L^p$ with $p>2$.

Remark: For $p=2$, see Theorem 2.19 of Hall and Heyde, Martingale Limit Theory and Its Application.
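A simulation sketch of the triangular strong law (illustrative, fixed seed, not from the original page), with $m_n = n^2$ and $p = 2$ so that $\sum_n m_n^{-p/2} = \sum_n n^{-2} < \infty$:

```python
import random

random.seed(3)
mean = 0.5                       # E[X] for X ~ Uniform(0, 1)
errs = []
for n in range(1, 60):
    m_n = n * n                  # m_n = n^2: sum over n of m_n^{-1} < inf
    row = [random.random() for _ in range(m_n)]   # fresh row of m_n iid draws
    errs.append(abs(sum(row) / m_n - mean))
print(errs[-1])
```

Along this path the row-average errors shrink roughly like $m_n^{-1/2}$, consistently with the $O_{L^p}$ rate used in the proof.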

Compact strong law of large numbers

Let $\Theta$ be a compact subset of $\rset^d$ with $d \in \nset^*$, $Z$ a measurable space, $\zeta$ a random variable taking its values in $Z$, and $L$ a measurable function defined on $\Theta \times Z$. Define on $\Theta$ the function $\mathcal{L} \colon \theta \mapsto \mathbb{E}\lrb{L(\theta, \zeta)}$, and for all $\theta \in \Theta$ and $n \in \nset$, the Monte Carlo average \begin{equation*} \hat{\mathcal{L}}^n(\theta) \eqdef \frac 1 {m_n} \sum_{i=1}^{m_n} L(\theta, \zeta_{n,i}), \end{equation*} where $\lr{m_n} \in {\nset^*}^{\nset}$ and, for each $n$, $\lr{\zeta_{n,i}}_{1 \leq i \leq m_n}$ are i.i.d. random variables with $\zeta_{n,1} \sim \zeta$.

Assume that:

  1. $\mathcal{L}$ is continuous on $\Theta$,
  2. there exists a measurable function $\Gamma \colon Z \rightarrow \rset_+$ such that a.s., for all $\theta \in \Theta$, $|L(\theta, \zeta)| \leq \Gamma(\zeta) \in L^p$ with $p \geq 2$,
  3. $\sum_{n=1}^{\infty} m_n^{-p/2} < \infty$.

Then, \begin{equation*} \underset{\Theta}{\sup} \lrav{\mathcal{L} - \hat{\mathcal{L}}^n} \overset{a.s.}{\rightarrow} 0. \end{equation*}

Proof

Let $\delta > 0$ and $\theta_0 \in \Theta$. Write \begin{equation*} \mathcal{L} - \hat{\mathcal{L}}^n = \mathcal{L} - \mathcal{L} \lr{\theta_0} + \mathcal{L} \lr{\theta_0} - \hat{\mathcal{L}}^n. \end{equation*} By continuity of $\mathcal{L}$ there exists $\epsilon_1 > 0$ such that $\underset{\Theta \cap \mathbf{B}\lr{\theta_0, \epsilon_1}}{\sup} \lr{\mathcal{L} - \mathcal{L} \lr{\theta_0}} \leq \frac {\delta} 3$. For all $\epsilon > 0$, \begin{align*} \underset{\Theta \cap \mathbf{B}\lr{\theta_0, \epsilon}}{\sup} \lr{\mathcal{L} \lr{\theta_0} - \hat{\mathcal{L}}^n} &= \PE\lrb{L\lr{\theta_0, \zeta}} - \underset{\theta \in \Theta \cap \mathbf{B}\lr{\theta_0, \epsilon}}{\inf} \frac 1 {m_n} \sum_{i=1}^{m_n} L\lr{\theta, \zeta_{n,i}} \\ &\leq \PE\lrb{L\lr{\theta_0, \zeta}} - \frac 1 {m_n} \sum_{i=1}^{m_n} \underset{\theta \in \Theta \cap \mathbf{B}\lr{\theta_0, \epsilon}}{\inf} L\lr{\theta, \zeta_{n,i}} \\ &= \PE\lrb{L\lr{\theta_0, \zeta}} - \PE\lrb{\underset{\theta \in \Theta \cap \mathbf{B}\lr{\theta_0, \epsilon}}{\inf} L\lr{\theta, \zeta}} + \PE\lrb{\underset{\theta \in \Theta \cap \mathbf{B}\lr{\theta_0, \epsilon}}{\inf} L\lr{\theta, \zeta}} - \frac 1 {m_n} \sum_{i=1}^{m_n} \underset{\theta \in \Theta \cap \mathbf{B}\lr{\theta_0, \epsilon}}{\inf} L\lr{\theta, \zeta_{n,i}}. \end{align*} By the monotone convergence theorem, there exists $\epsilon_2 \in (0; \epsilon_1)$ such that \begin{equation*} \PE\lrb{L\lr{\theta_0, \zeta}} - \PE\lrb{\underset{\theta \in \Theta \cap \mathbf{B}\lr{\theta_0, \epsilon_2}}{\inf} L\lr{\theta, \zeta}} \leq \frac {\delta} 3. \end{equation*} We easily prove that a.s., $\lrav{\underset{\theta \in \Theta \cap \mathbf{B}\lr{\theta_0, \epsilon_2}}{\inf} L\lr{\theta, \zeta}} \leq \Gamma(\zeta) \in L^p$. Together with assumption 3. 
this allows us to apply the triangular strong law of large numbers to $\lr{\underset{\theta \in \Theta \cap \mathbf{B}\lr{\theta_0, \epsilon_2}}{\inf} L\lr{\theta, \zeta_{n,i}}}_{1\leq i \leq m_n, n\in\nset}$, which provides a.s. the existence of $n_0 \in \nset$ such that for all $n \geq n_0$, \begin{equation*} \PE\lrb{\underset{\theta \in \Theta \cap \mathbf{B}\lr{\theta_0, \epsilon_2}}{\inf} L\lr{\theta, \zeta}} - \frac 1 {m_n} \sum_{i=1}^{m_n} \underset{\theta \in \Theta \cap \mathbf{B}\lr{\theta_0, \epsilon_2}}{\inf} L\lr{\theta, \zeta_{n,i}} \leq \frac {\delta} 3. \end{equation*} Together with the definitions of $\epsilon_1$ and $\epsilon_2$, this yields the existence a.s. of $n_0 \in \nset$ such that for all $n \geq n_0$, \begin{equation*} \underset{\Theta \cap \mathbf{B}\lr{\theta_0, \epsilon_2}}{\sup} \lr{\mathcal{L} - \hat{\mathcal{L}}^n} \leq \underset{\Theta \cap \mathbf{B}\lr{\theta_0, \epsilon_1}}{\sup} \lr{\mathcal{L} - \mathcal{L}(\theta_0)} + \underset{\Theta \cap \mathbf{B}\lr{\theta_0, \epsilon_2}}{\sup} \lr{\mathcal{L}(\theta_0) - \hat{\mathcal{L}}^n} \leq \delta. \end{equation*} Writing $\epsilon_2(\theta_0)$ and $n_0(\theta_0)$ to emphasize the dependence on $\theta_0$: since $\Theta$ is compact and $\Theta \subset \underset{\theta \in \Theta}{\cup} \mathbf{B}\lr{\theta, \epsilon_2(\theta)}$, we can extract a finite subcover $\underset{1 \leq i \leq I}{\cup} \mathbf{B}\lr{\theta_i, \epsilon_2(\theta_i)}$ of $\Theta$ with $I \in \nset$. By finite intersection of almost sure events, setting $n_0 \eqdef \max\lr{n_0(\theta_1), \dots, n_0(\theta_I)}$, almost surely for all $n \geq n_0$, \begin{equation*} \underset{\Theta}{\sup} \lr{\mathcal{L} - \hat{\mathcal{L}}^n} = \underset{1 \leq i \leq I}{\max} \underset{\Theta \cap \mathbf{B}\lr{\theta_i, \epsilon_2(\theta_i)}}{\sup} \lr{\mathcal{L} - \hat{\mathcal{L}}^n} \leq \delta. 
\end{equation*} That being true for all $\delta = \frac 1 k$ with $k \in \nset^*$, by countable intersection of almost sure events, $\max\lr{0, \eqsp \underset{\Theta}{\sup} \lr{\mathcal{L} - \hat{\mathcal{L}}^n}} \overset{a.s.}{\rightarrow} 0$. The same reasoning applied to $-L$ provides $\underset{\Theta}{\sup} \lrav{\mathcal{L} - \hat{\mathcal{L}}^n} \overset{a.s.}{\rightarrow} 0$.

Remark. If $L$ is continuous with respect to its first variable $\theta$, then under assumption 2, Lebesgue's dominated convergence theorem shows that $\mathcal{L}$ is continuous, i.e. assumption 1 is satisfied.
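A minimal simulation sketch of the uniform convergence (illustrative choices, not from the original page): $\Theta = [0,1]$, $\zeta \sim \unif(0,1)$ and $L(\theta, \zeta) = (\theta - \zeta)^2$, so that $\mathcal{L}(\theta) = \theta^2 - \theta + 1/3$; the supremum over $\Theta$ is approximated on a grid:

```python
import random

random.seed(4)

def curly_L(theta):
    # E[(theta - zeta)^2] for zeta ~ Uniform(0, 1)
    return theta ** 2 - theta + 1.0 / 3.0

thetas = [k / 100 for k in range(101)]   # grid over the compact Theta = [0, 1]

def sup_error(m):
    """Grid approximation of sup_Theta |curly_L - hat L^n| with m iid draws."""
    zs = [random.random() for _ in range(m)]
    m1 = sum(zs) / m
    m2 = sum(z * z for z in zs) / m
    # For this quadratic L, the empirical average at theta has the closed
    # form theta^2 - 2 theta m1 + m2 (specific to L(theta,z) = (theta-z)^2).
    return max(abs(curly_L(t) - (t * t - 2 * t * m1 + m2)) for t in thetas)

print(sup_error(100), sup_error(10_000))
```

The supremum shrinks as the Monte Carlo sample grows, as the theorem predicts.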

world/marcinkiewicz.1642239717.txt.gz · Last modified: 2022/03/16 01:36 (external edit)