{{page>:defs}}
{{tag> bounds}}
===== Bounds on the tail of the normal distribution =====
If $X_1$ follows a standard normal distribution, then for all $x>0$,
$$
\PP(X_1>x)=\int_x^\infty \frac{1}{\sqrt{2\pi}} e^{-t^2/2} dt \leq \int_x^\infty \frac{t}{x} \frac{1}{\sqrt{2\pi}} e^{-t^2/2} dt =\frac{e^{-x^2/2}}{x\sqrt{2\pi}}
$$
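As a quick numerical sanity check (a minimal Python sketch, assuming ''numpy'' and ''scipy'' are available; the test points are arbitrary):
<code python>
import numpy as np
from scipy.stats import norm

# Compare the exact Gaussian tail P(X_1 > x) with the bound e^{-x^2/2}/(x sqrt(2 pi))
for x in [0.5, 1.0, 2.0, 4.0]:
    exact = norm.sf(x)  # survival function P(X_1 > x)
    bound = np.exp(-x**2 / 2) / (x * np.sqrt(2 * np.pi))
    print(f"x={x:3.1f}  exact={exact:.3e}  bound={bound:.3e}")
</code>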
===== Bounds on the tail of a max distribution =====
Assume that $(X_i)$ are iid. Denote $M_n=\max_{i\in [1:n]} X_{i}$. By a union bound,
$$
\PP(M_n>x)\leq \sum_{i=1}^n \PP(X_i>x) \leq n \PP(X_1>x)
$$
When the distribution of $X_1$ is standard normal, combining this with the tail bound above yields
\begin{equation} \label{eq:bound:max}
\PP(M_n>x) \leq n \frac{e^{-x^2/2}}{x\sqrt{2\pi}}
\end{equation}
The bound has the right Gaussian decay in $x$ but grows linearly in $n$.
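The bound \eqref{eq:bound:max} can be checked by Monte Carlo (a small Python sketch, assuming ''numpy''; the values of $n$, $x$ and the number of trials are arbitrary):
<code python>
import numpy as np

# Monte Carlo estimate of P(M_n > x) vs the union bound n e^{-x^2/2}/(x sqrt(2 pi))
rng = np.random.default_rng(0)
n, x, trials = 100, 3.0, 20_000
M = rng.standard_normal((trials, n)).max(axis=1)  # samples of M_n
estimate = (M > x).mean()
bound = n * np.exp(-x**2 / 2) / (x * np.sqrt(2 * np.pi))
print(f"P(M_n > x) ~ {estimate:.4f}   bound = {bound:.4f}")
</code>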
===== Bounds on the moments of a max distribution =====
Let $(X_i)$ be iid standard Gaussian random variables.
Then, by Jensen's inequality and since $\rme^{\lambda M_n^2} \leq \sum_{i=1}^n \rme^{\lambda X_i^2}$, for all $\lambda \in (0,1/2)$,
$$
\rme^{\lambda \PE[M_n^2]} \leq \PE[\rme^{\lambda M_n^2}] \leq n \PE[\rme^{\lambda X_1^2}]=n \int \frac{\rme^{-\frac{1-2\lambda}{2}x^2}}{\sqrt{2\pi}} \rmd x=n \lr{1-2\lambda}^{-1/2}
$$
Taking the $\log$ and dividing by $\lambda$, we get:
$$
\PE[M_n^2] \leq \frac{\log n-2^{-1}\log(1-2\lambda)}{\lambda}
$$
Choosing $\lambda$ such that $\log n=-2^{-1}\log(1-2\lambda)$, i.e. $\lambda=(1-n^{-2})/2$, yields for $n \geq 2$,
$$
\PE[M_n^2] \leq \frac{4\log n}{1-n^{-2}}= 4 \log n +\frac{4 \log n}{n^2-1} \leq 4\log n +1
$$
With a similar argument, we can show that
$$
\PE[M_n] \leq \sqrt{2 \log n}
$$
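Both moment bounds are easy to check by simulation (a Python sketch assuming ''numpy''; the sample size is arbitrary):
<code python>
import numpy as np

# Monte Carlo check of E[M_n] <= sqrt(2 log n) and E[M_n^2] <= 4 log n + 1
rng = np.random.default_rng(0)
for n in [2, 10, 100, 1000]:
    M = rng.standard_normal((10_000, n)).max(axis=1)  # samples of M_n
    print(f"n={n:4d}  E[M_n] ~ {M.mean():5.3f} (<= {np.sqrt(2 * np.log(n)):5.3f})"
          f"  E[M_n^2] ~ {np.mean(M**2):6.3f} (<= {4 * np.log(n) + 1:6.3f})")
</code>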
Finally, Markov's inequality yields for all $x>0$,
$$
\PP(M_n > x) \leq \frac{\sqrt{2 \log n}}{x}
$$
which is better than the previous bound \eqref{eq:bound:max} wrt $n$ but dramatically worse wrt $x$...
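The trade-off is easy to see numerically (a small Python comparison; the choices of $n$ and $x$ are arbitrary):
<code python>
import numpy as np

# Compare the two tail bounds on P(M_n > x) as x grows
n = 1000
for x in [2.0, 4.0, 6.0]:
    gauss = n * np.exp(-x**2 / 2) / (x * np.sqrt(2 * np.pi))  # sharp in x, linear in n
    markov = np.sqrt(2 * np.log(n)) / x                       # mild in n, only 1/x in x
    print(f"x={x:3.1f}  gaussian-tail bound = {gauss:.3e}  markov bound = {markov:.3e}")
</code>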
===== Another inequality which can be useful for max distribution =====
Let $Z$ be a non-negative random variable on a probability space $(\Omega,\mcf,\PP)$ and assume that there exist a constant $c> 1$ and an integer $n\geq 1$ such that for all $t\geq 0$,
$$
\PP(Z>t)\leq c\rme^{-2nt^2}
$$
Then,
$$
\PE[Z] \leq \sqrt{ \frac{\log c +1}{2n} }
$$
$\bproof$
\begin{align*}
\PE[Z^2]&=\int_0^\infty \PP(Z>\sqrt{t})\rmd t \leq \int_0^\infty 1\wedge \lr{c\rme^{-2nt} } \rmd t =\frac{\log c}{2n} + \int_{\frac{\log c}{2n}} ^\infty c\rme^{-2nt} \rmd t =\frac{\log c}{2n} +\frac{1}{2n}
\end{align*}
The proof is completed by noting that $\PE[Z] \leq \sqrt{\PE[Z^2]}$, by Jensen's inequality.
$\eproof$
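The computation in the proof is easy to confirm numerically (a sketch assuming ''scipy''; the values of $c$ and $n$ are arbitrary):
<code python>
import numpy as np
from scipy.integrate import quad

# Verify: int_0^inf min(1, c e^{-2 n t}) dt = (log c + 1) / (2 n)
c, n = 5.0, 10
f = lambda t: min(1.0, c * np.exp(-2 * n * t))
t0 = np.log(c) / (2 * n)  # point where c e^{-2 n t} hits 1
integral = quad(f, 0, t0)[0] + quad(f, t0, np.inf)[0]
print(integral, (np.log(c) + 1) / (2 * n))  # the two values should agree
</code>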
==== Some comments on the approach ====
It may seem a bit convoluted to bound $\PE[Z]$ through a bound on $\PE[Z^2]$, so I tried a direct proof:
\begin{align*}
\PE[Z]&=\int_0^\infty \PP(Z>t)\rmd t \leq \int_0^\infty 1\wedge \lr{c\rme^{-2nt^2} } \rmd t =\sqrt{\frac{\log c}{2n}} + \int_{\sqrt{\frac{\log c}{2n}}} ^\infty c\rme^{-2nt^2} \rmd t \\
&\leq \sqrt{\frac{\log c}{2n}} +\int_{\sqrt{\frac{\log c}{2n}}} ^\infty \frac{t}{\sqrt{\frac{\log c}{2n}}} c\rme^{-2nt^2} \rmd t = \sqrt{\frac{\log c}{2n}} +\lrb{\frac{c\rme^{-2nt^2}}{-4n\sqrt{\frac{\log c}{2n}}} }_{\sqrt{\frac{\log c}{2n}}}^\infty \\
&=\frac{1}{\sqrt{2n}} \lr{\sqrt{\log c}+ \frac{1}{2 \sqrt{\log c}}}
\end{align*}
The bound is less sharp because, on the second line, we only apply a rough bound on the survival function of a Gaussian distribution. Not surprisingly, the resulting bound is weaker than the previous one, since $\sqrt{a+1} \leq \sqrt{a}+\frac{1}{2\sqrt{a}}$ for all $a>0$.
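A quick numerical check of this last inequality (Python; the test values of $a$ are arbitrary):
<code python>
import numpy as np

# Check sqrt(a+1) <= sqrt(a) + 1/(2 sqrt(a)), i.e. the direct bound is weaker
for a in [0.5, 1.0, 5.0, 50.0]:
    lhs, rhs = np.sqrt(a + 1), np.sqrt(a) + 1 / (2 * np.sqrt(a))
    print(f"a={a:4.1f}  sqrt(a+1)={lhs:6.3f}  sqrt(a)+1/(2 sqrt(a))={rhs:6.3f}")
</code>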
==== Kolmogorov's maximal inequality ====
Let \((M_k)_{k\in\nset}\) be a square integrable \((\mcf_k)_{k\in\nset}\)-martingale. Then,
\begin{equation}
\PP\lr{\sup_{k=1}^n |M_k| \geq \alpha} \leq \frac{\PE[M_n^2]}{\alpha^2}
\end{equation}
$\bproof$
Let \(\sigma=\inf\set{k\geq 1}{|M_k| \geq \alpha}\) with the convention that $\inf \emptyset=\infty$. Then, by Markov's inequality,
\begin{equation}
\PP\lr{\sup_{k=1}^n |M_k| \geq \alpha} =\PP(\sigma \leq n)=\PP\lr{|M_{\sigma \wedge n}| \geq \alpha} \leq \frac{\PE[M^2_{\sigma \wedge n}]}{\alpha^2} \label{eq:kolm:one}
\end{equation}
We now bound the rhs using that \((M_{\sigma \wedge k})_{k\in\nset}\) is also a \((\mcf_k)_{k\in\nset}\)-martingale. To see this last property, write \(M_{\sigma \wedge k}=\indiacc{k\leq \sigma} M_{k}+\indiacc{k> \sigma}M_\sigma\); since $\sigma$ is a stopping time, $\{\sigma \geq k\} \in \mcf_{k-1}$, which implies
\begin{equation*}
\PE\lrb{M_{\sigma \wedge k}|\mcf_{k-1}}=\indiacc{k\leq \sigma}\underbrace{\PE\lrb{M_{k}|\mcf_{k-1}}}_{M_{k-1}}+\indiacc{k> \sigma} M_\sigma=M_{\sigma \wedge (k-1)}
\end{equation*}
Now, using the orthogonality of the increments of the stopped martingale, the rhs of \eqref{eq:kolm:one} can be written as
\begin{align*}
\PE[M^2_{\sigma \wedge n}]&= \PE[M_1^2]+\sum_{k=1}^{n-1} \PE\lrb{\lr{M_{\sigma \wedge (k+1)} - M_{\sigma \wedge k} }^2}\\
&=\PE[M_1^2]+\sum_{k=1}^{n-1} \PE\lrb{\lr{M_{k+1} - M_{k} }^2 \indiacc{\sigma > k}}\\
&\leq \PE[M_1^2]+\sum_{k=1}^{n-1} \PE\lrb{\lr{M_{k+1} - M_{k} }^2 }=\PE[M_n^2]
\end{align*}
$\eproof$
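The inequality can be illustrated on a simple $\pm 1$ random-walk martingale (a Monte Carlo sketch in Python, assuming ''numpy''; the parameters are arbitrary):
<code python>
import numpy as np

# Kolmogorov's inequality for the +-1 random walk M_k = xi_1 + ... + xi_k
rng = np.random.default_rng(0)
n, alpha, trials = 50, 10.0, 50_000
steps = rng.choice([-1.0, 1.0], size=(trials, n))
M = steps.cumsum(axis=1)                        # M_1, ..., M_n along each row
lhs = (np.abs(M).max(axis=1) >= alpha).mean()   # P(sup_k |M_k| >= alpha)
rhs = np.mean(M[:, -1] ** 2) / alpha**2         # E[M_n^2] / alpha^2 (= n / alpha^2)
print(f"P(sup |M_k| >= alpha) ~ {lhs:.4f}   bound = {rhs:.4f}")
</code>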
==== Doob's inequalities ====
<WRAP center round tip 90%>
* **(i)** Let \((X_k)_{k\in\nset}\) be a non-negative \((\mcf_k)_{k\in\nset}\)-supermartingale. Then,
$$
\PP\lr{\sup_{k=1}^\infty X_k \geq \epsilon} \leq \PE[X_0]/\epsilon
$$
* **(ii)** Let \((X_k)_{k\in\nset}\) be a non-negative \((\mcf_k)_{k\in\nset}\)-submartingale. Then,
$$
\PP\lr{\max_{k=1}^n X_k \geq \epsilon} \leq \PE[X_n]/\epsilon
$$
</WRAP>
=== Proof ===
We prove **(ii)**. Define $\tau_\epsilon=\inf \set{k\in \nset}{X_k \geq \epsilon}$ with the convention that $\inf \emptyset =\infty$. Then,
\begin{align*}
\epsilon \PP\lr{\max_{k=1}^n X_k \geq \epsilon}= \epsilon \PP\lr{\tau_\epsilon \leq n}&\leq \PE[X_{\tau_\epsilon} \indiacc{\tau_\epsilon \leq n}]=\sum_{k=0}^n \PE[X_k \indiacc{\tau_\epsilon=k}]\\
&\leq \sum_{k=0}^n \PE[\PE[X_n|\mcf_k] \indiacc{\tau_\epsilon=k}]=\sum_{k=0}^n \PE[X_n \indiacc{\tau_\epsilon=k}]=\PE[X_n \indiacc{\tau_\epsilon\leq n}]
\end{align*}
The result follows since $X_n$ is non-negative, so that $\PE[X_n \indiacc{\tau_\epsilon\leq n}] \leq \PE[X_n]$.
$\eproof$
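As an illustration of **(ii)**, take the non-negative submartingale $X_k=|M_k|$, where $(M_k)$ is a $\pm 1$ random walk (a Python sketch assuming ''numpy''; the parameters are arbitrary):
<code python>
import numpy as np

# Doob's inequality (ii) for X_k = |M_k|, with M_k a +-1 random walk
rng = np.random.default_rng(0)
n, eps, trials = 50, 10.0, 50_000
X = np.abs(rng.choice([-1.0, 1.0], size=(trials, n)).cumsum(axis=1))
lhs = (X.max(axis=1) >= eps).mean()   # P(max_k X_k >= eps)
rhs = X[:, -1].mean() / eps           # E[X_n] / eps
print(f"P(max X_k >= eps) ~ {lhs:.4f}   bound = {rhs:.4f}")
</code>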