Welcome to Randal Douc's wiki

A collaborative site on maths but not only!


$$ \newcommand{\arginf}{\mathrm{arginf}} \newcommand{\argmin}{\mathrm{argmin}} \newcommand{\argmax}{\mathrm{argmax}} \newcommand{\asconv}[1]{\stackrel{#1-a.s.}{\rightarrow}} \newcommand{\Aset}{\mathsf{A}} \newcommand{\b}[1]{{\mathbf{#1}}} \newcommand{\ball}[1]{\mathsf{B}(#1)} \newcommand{\bbQ}{{\mathbb Q}} \newcommand{\bproof}{\textbf{Proof :}\quad} \newcommand{\bmuf}[2]{b_{#1,#2}} \newcommand{\card}{\mathrm{card}} \newcommand{\chunk}[3]{{#1}_{#2:#3}} \newcommand{\condtrans}[3]{p_{#1}(#2|#3)} \newcommand{\convprob}[1]{\stackrel{#1-\text{prob}}{\rightarrow}} \newcommand{\Cov}{\mathbb{C}\mathrm{ov}} \newcommand{\cro}[1]{\langle #1 \rangle} \newcommand{\CPE}[2]{\PE\lr{#1| #2}} \renewcommand{\det}{\mathrm{det}} \newcommand{\dimlabel}{\mathsf{m}} \newcommand{\dimU}{\mathsf{q}} \newcommand{\dimX}{\mathsf{d}} \newcommand{\dimY}{\mathsf{p}} \newcommand{\dlim}{\Rightarrow} \newcommand{\e}[1]{{\left\lfloor #1 \right\rfloor}} \newcommand{\eproof}{\quad \Box} \newcommand{\eremark}{</WRAP>} \newcommand{\eqdef}{:=} \newcommand{\eqlaw}{\stackrel{\mathcal{L}}{=}} \newcommand{\eqsp}{\;} \newcommand{\Eset}{ {\mathsf E}} \newcommand{\esssup}{\mathrm{essup}} \newcommand{\fr}[1]{{\left\langle #1 \right\rangle}} \newcommand{\falph}{f} \renewcommand{\geq}{\geqslant} \newcommand{\hchi}{\hat \chi} \newcommand{\Hset}{\mathsf{H}} \newcommand{\Id}{\mathrm{Id}} \newcommand{\img}{\text{Im}} \newcommand{\indi}[1]{\mathbf{1}_{#1}} \newcommand{\indiacc}[1]{\mathbf{1}_{\{#1\}}} \newcommand{\indin}[1]{\mathbf{1}\{#1\}} \newcommand{\itemm}{\quad \quad \blacktriangleright \;} \newcommand{\jointtrans}[3]{p_{#1}(#2,#3)} \newcommand{\ker}{\text{Ker}} \newcommand{\klbck}[2]{\mathrm{K}\lr{#1||#2}} \newcommand{\law}{\mathcal{L}} \newcommand{\labelinit}{\pi} \newcommand{\labelkernel}{Q} \renewcommand{\leq}{\leqslant} \newcommand{\lone}{\mathsf{L}_1} \newcommand{\lrav}[1]{\left|#1 \right|} \newcommand{\lr}[1]{\left(#1 \right)} \newcommand{\lrb}[1]{\left[#1 \right]} \newcommand{\lrc}[1]{\left\{#1 \right\}} \newcommand{\lrcb}[1]{\left\{#1 \right\}} \newcommand{\ltwo}[1]{\PE^{1/2}\lrb{\lrcb{#1}^2}} \newcommand{\Ltwo}{\mathrm{L}^2} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mcbb}{\mathcal B} \newcommand{\mcf}{\mathcal{F}} \newcommand{\meas}[1]{\mathrm{M}_{#1}} \newcommand{\norm}[1]{\left\|#1\right\|} \newcommand{\normmat}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \newcommand{\nset}{\mathbb N} \newcommand{\N}{\mathcal{N}} \newcommand{\one}{\mathsf{1}} \newcommand{\PE}{\mathbb E} \newcommand{\pminfty}{_{-\infty}^\infty} \newcommand{\PP}{\mathbb P} \newcommand{\projorth}[1]{\mathsf{P}^\perp_{#1}} \newcommand{\Psif}{\Psi_f} \newcommand{\pscal}[2]{\langle #1,#2\rangle} \newcommand{\pscal}[2]{\langle #1,#2\rangle} \newcommand{\psconv}{\stackrel{\PP-a.s.}{\rightarrow}} \newcommand{\qset}{\mathbb Q} \newcommand{\revcondtrans}[3]{q_{#1}(#2|#3)} \newcommand{\rmd}{\mathrm d} \newcommand{\rme}{\mathrm e} \newcommand{\rmi}{\mathrm i} \newcommand{\Rset}{\mathbb{R}} \newcommand{\rset}{\mathbb{R}} \newcommand{\rti}{\sigma} \newcommand{\section}[1]{==== #1 ====} \newcommand{\seq}[2]{\lrc{#1\eqsp: \eqsp #2}} \newcommand{\set}[2]{\lrc{#1\eqsp: \eqsp #2}} \newcommand{\sg}{\mathrm{sgn}} \newcommand{\supnorm}[1]{\left\|#1\right\|_{\infty}} \newcommand{\thv}{{\theta_\star}} \newcommand{\tmu}{ {\tilde{\mu}}} \newcommand{\Tset}{ {\mathsf{T}}} \newcommand{\Tsigma}{ {\mathcal{T}}} \newcommand{\ttheta}{{\tilde \theta}} \newcommand{\tv}[1]{\left\|#1\right\|_{\mathrm{TV}}} 
\newcommand{\unif}{\mathrm{Unif}} \newcommand{\weaklim}[1]{\stackrel{\mathcal{L}_{#1}}{\rightsquigarrow}} \newcommand{\Xset}{{\mathsf X}} \newcommand{\Xsigma}{\mathcal X} \newcommand{\Yset}{{\mathsf Y}} \newcommand{\Ysigma}{\mathcal Y} \newcommand{\Var}{\mathbb{V}\mathrm{ar}} \newcommand{\zset}{\mathbb{Z}} \newcommand{\Zset}{\mathsf{Z}} $$


A useful result for the consistency of decision trees.

Assume that, given iid random variables $(X_i)_{i\in [1:n]}$ taking values in $[0,1]^d$ and some extra random variable $\Theta$, you build a decision tree whose leaves form a partition of $[0,1]^d$ into cells $A_\ell$. Note that the cells $A_\ell$ are deterministic functions of $(X_i)_{i\in [1:n]}$ and $\Theta$, but we do not stress this dependence in the notation.

Assume that $Y_i=r(X_i)+\epsilon_i$, where $r:[0,1]^d \to \Rset$ is continuous and the $(\epsilon_i)_{i \in [1:n]}$ are iid, centered, with finite variance $\sigma^2$, and independent of $(X_i)_{i\in [1:n]}$ and $\Theta$. In what follows, $(X,Y) \eqlaw (X_1,Y_1)$ denotes a pair which is independent of the sample and of $\Theta$. Denote $$ \hat r_n(x, \Theta) = \sum_{i=1}^n w_{ni} Y_i, \quad \mbox{where} \quad w_{ni}\eqdef \frac{\indiacc{X_i \in A_n(x, \Theta)}}{N_n(x, \Theta)}, $$ $A_n(x,\Theta)$ being the cell containing $x$ and $N_n(x, \Theta)=\card\set{i\in [1:n]}{X_i \in A_n(x,\Theta)}$ the number of data points falling in this cell. Define the diameter of any cell $A$ by $$ \textrm{diam}(A) = \sup_{x,z \in A} \|x - z\|_2. $$
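For concreteness, here is a minimal Python sketch of the estimate $\hat r_n$ in the case where the cells $A_\ell$ are the leaves of a CART tree fitted with scikit-learn; the helper ''partition_estimate'' and the data-generating choices below are purely illustrative.

<code python>
# Illustrative sketch only: the partitioning estimate \hat r_n(x, Theta)
# when the cells A_l are the leaves of a fitted regression tree.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def partition_estimate(tree, X_train, Y_train, x):
    """Return \hat r_n(x) = sum_i w_ni Y_i with w_ni = 1{X_i in A_n(x)} / N_n(x)."""
    leaves = tree.apply(X_train)              # leaf index of each X_i
    leaf_x = tree.apply(x.reshape(1, -1))[0]  # leaf A_n(x, Theta) containing x
    in_cell = (leaves == leaf_x)              # indicator {X_i in A_n(x, Theta)}
    N_n = in_cell.sum()                       # N_n(x, Theta)
    if N_n == 0:                              # safety guard (leaves of a fitted tree always contain training points)
        return 0.0
    w = in_cell / N_n                         # weights w_ni
    return float(w @ Y_train)                 # \hat r_n(x, Theta)

rng = np.random.default_rng(0)                # extra randomness, playing the role of Theta
n, d = 500, 2
X_train = rng.uniform(size=(n, d))
Y_train = np.cos(2 * np.pi * X_train[:, 0]) + 0.1 * rng.standard_normal(n)
tree = DecisionTreeRegressor(min_samples_leaf=10, random_state=0).fit(X_train, Y_train)
print(partition_estimate(tree, X_train, Y_train, np.array([0.3, 0.7])))
</code>

For such a tree the result coincides with ''tree.predict'', since a CART leaf predicts the average of the $Y_i$ it contains; writing the estimate through the weights $w_{ni}$ simply makes the local-averaging structure explicit.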

Stone's theorem. Assume that:

  • $\textrm{diam}(A_n(X,\Theta)) \to 0$ in probability, as $n \to \infty$.
  • $N_n(X, \Theta) \to \infty$ in probability, as $n \to \infty$.

Then, $$ \lim_{n\to \infty}\PE\lrb{(\hat r_n(X, \Theta)- r(X))^2} = 0. $$
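For example, take for the cells the $k_n^d$ cubes of side $1/k_n$ of the regular grid on $[0,1]^d$ (a deterministic partition, so that $\Theta$ plays no role) and assume that the $X_i$ are uniformly distributed on $[0,1]^d$. Then $\textrm{diam}(A_n(X,\Theta)) = \sqrt{d}/k_n \to 0$ as soon as $k_n \to \infty$ and, conditionally on $X$, $N_n(X,\Theta)$ is binomial with parameters $(n, k_n^{-d})$, so that $N_n(X,\Theta) \to \infty$ in probability as soon as $n k_n^{-d} \to \infty$. Both assumptions hold for instance with $k_n=\lfloor n^{1/(d+1)} \rfloor$, and the theorem then yields the $\Ltwo$-consistency of the classical histogram estimate.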

$\bproof$ The proof is based on the following lemma:

Lemma. Let $(Z_n)$ be a sequence of random variables taking values in a set $\Zset \subset \Rset$ and converging to $0$ in probability. Assume that $F$ is a bounded nonnegative function on $\Zset$, continuous at $0$ with $F(0)=0$. Then $\PE[F(Z_n)] \to 0$.

Proof of the Lemma: For any $\epsilon>0$, write $$ \PE[F(Z_n)]=\PE[F(Z_n)\lrcb{\indi{|Z_n| >\epsilon}+\indi{|Z_n| \leq \epsilon}}]\leq \|F\|_\infty \PP(|Z_n|>\epsilon) + \sup_{|z| \leq \epsilon} F(z). $$ Letting $n \to \infty$ yields $\limsup_n \PE[F(Z_n)] \leq \sup_{|z| \leq \epsilon} F(z)$, and since $F$ is continuous at $0$ with $F(0)=0$, the right-hand side tends to $0$ as $\epsilon \to 0$, which concludes the proof. $\eproof$

Denote $\Delta(\rho)=\sup_{\|u-v\| \leq \rho} |r(u)-r(v)|$. Since $r$ is continuous on the compact set $[0,1]^d$, it is bounded and uniformly continuous; therefore, $\Delta$ is a bounded function and $\Delta(\rho) \to 0$ as $\rho \to 0$. Then, denoting $\rho_n(X,\Theta)=\textrm{diam}\, A_n(X,\Theta) \in [0,\sqrt{d}]$, we have by the triangle inequality in $\Ltwo$, \begin{align*} \ltwo{\hat r_n(X, \Theta)- r(X)} & \leq \ltwo{\sum_{i=1}^{n} w_{ni} [r(X_i)-r(X)]} + \ltwo{\sum_{i=1}^{n} w_{ni} \epsilon_i}\\ & \leq \ltwo{\Delta(\rho_n(X,\Theta))}+\sigma \PE^{1/2}\lrb{\sum_{i=1}^{n} w^2_{ni}}\\ & \leq \ltwo{\Delta(\rho_n(X,\Theta))}+\sigma\PE^{1/2}\lrb{\frac{1}{N_n(X,\Theta)}\underbrace{\sum_{i=1}^{n} w_{ni}}_{=1}} \end{align*} where the second line uses that $\|X_i-X\| \leq \rho_n(X,\Theta)$ whenever $w_{ni}\neq 0$, and that the $(\epsilon_i)$ are centered with variance $\sigma^2$ and independent of the weights, while the third line uses $w^2_{ni}=w_{ni}/N_n(X,\Theta)$. The proof then follows by applying the lemma first to the random variables $Z_n=\rho_n(X,\Theta)$, taking values in $[0,\sqrt{d}]$ and converging to $0$ in probability by the first assumption, with the function $F=\Delta^2$, which is bounded on $[0,\sqrt{d}]$, continuous at $0$ and such that $\Delta^2(0)=0$; and then to the random variables $Z_n=1/N_n(X,\Theta)$, taking values in $[0,1]$ and converging to $0$ in probability by the second assumption, with the bounded function $F\colon x\mapsto x$, which is continuous at $0$ and vanishes there. $\eproof$
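To see the theorem at work numerically, here is a small Monte Carlo sketch estimating $\PE\lrb{(\hat r_n(X)- r(X))^2}$ for the regular-grid partition of $[0,1]$ of the example above (here $\Theta$ plays no role), with $k_n=\lfloor \sqrt{n}\rfloor$; the regression function, the noise level and the function names are illustrative choices. Both assumptions hold, so the printed mean squared errors should decrease as $n$ grows.

<code python>
# Illustrative Monte Carlo sketch: estimate E[(\hat r_n(X) - r(X))^2] for the
# regular-grid partition of [0,1] with k_n = floor(sqrt(n)) cells (d = 1).
import numpy as np

rng = np.random.default_rng(0)
r = lambda x: np.cos(2 * np.pi * x)          # regression function (illustrative choice)

def grid_estimate(X_train, Y_train, x, k):
    """Partitioning estimate \hat r_n(x) on the grid of k cells of side 1/k."""
    cells = np.minimum((X_train * k).astype(int), k - 1)   # cell index of each X_i
    cell_x = min(int(x * k), k - 1)                        # cell A_n(x) containing x
    in_cell = cells == cell_x
    return Y_train[in_cell].mean() if in_cell.any() else 0.0

for n in [100, 1000, 10000]:
    k = int(np.sqrt(n))                      # k_n -> infinity and n / k_n -> infinity
    errs = []
    for _ in range(200):                     # Monte Carlo replications
        X_train = rng.uniform(size=n)
        Y_train = r(X_train) + 0.3 * rng.standard_normal(n)
        x = rng.uniform()                    # a fresh X, independent of the sample
        errs.append((grid_estimate(X_train, Y_train, x, k) - r(x)) ** 2)
    print(n, np.mean(errs))
</code>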
