{{page>:defs}}

====== Leave One Out Cross Validation (LOOCV) and Cook's distance ======

Denote the complete data set by $X \in \rset^{n \times p}$ and $y \in \rset^n$, and the leave-one-out data set by $X_{-i} \in \rset^{(n-1) \times p}$ and $y_{-i} \in \rset^{n-1}$. Compute the associated full and leave-one-out regression fits
\begin{align*}
\hat \beta&=(X^T X)^{-1} X^T y\\
\hat \beta_{-i}&=(X_{-i}^T X_{-i})^{-1} X_{-i}^T y_{-i}\\
H&=X (X^T X)^{-1} X^T\\
H_{-i}&=X_{-i} (X_{-i}^T X_{-i})^{-1} X_{-i}^T
\end{align*}
Denote by $x_i$ the row vectors of $X$ and let $\hat y_i=x_i \hat \beta$ and $\hat y_{-i}=x_i \hat \beta_{-i}$ be the fitted values at $x_i$ when using all the data and when leaving out the point $x_i$, respectively.

**__Fundamental Lemma__** For all $i \in \{1,\ldots,n\}$,
\begin{equation}
\label{eq:fond}
y_i -\hat y_{-i}=\frac{y_i-\hat y_i}{1-H_{ii}}
\end{equation}
This identity allows us to compute every leave-one-out prediction error from a single fit on the full data, without refitting the model $n$ times. A numerical check of this identity is sketched at the end of this page.

=== Proof ===

The trick is to consider the augmented data set that includes the point $(x_i,\hat y _{-i})$. Define $\tilde y$ as the vector $y$ whose $i$-th component $y_i$ is replaced by $\hat y_{-i}$, and note that for every $\beta \in \rset^p$,
$$
\sum_{j} (\tilde y_j -x_j \beta)^2 \geq \sum_{j \neq i} (y_j -x_j \beta)^2 \geq \sum_{j \neq i} (y_j -x_j \hat \beta_{-i})^2
$$
with equality if $\beta=\hat \beta_{-i}$ (the added term $(\tilde y_i - x_i \hat \beta_{-i})^2=(\hat y_{-i}-x_i \hat \beta_{-i})^2$ then vanishes). This shows that $\hat \beta_{-i}$ is also the least-squares estimator obtained by regressing $\tilde y$ on the predictors $X$: this is the key trick. Therefore, since by definition $\hat{y}_{-i}= x_i \hat \beta_{-i}$,
\begin{align*}
\hat{y}_{-i} &= x_i \hat \beta_{-i}=(H \tilde y)_i=\sum_{j\neq i}H_{ij}y_{j}+H_{ii}\hat{y}_{-i}= \hat{y}_{i}-H_{ii}y_{i}+H_{ii}\hat{y}_{-i}
\end{align*}
Rearranging the terms, we get \eqref{eq:fond}. $\eproof$

As a straightforward consequence of \eqref{eq:fond}, the **Leave-One-Out Cross Validation** criterion can be written as
\begin{align*}
& LOOCV= n^{-1} \sum_{i=1}^n (y_i - \hat y_{-i})^2=n^{-1} \sum_{i=1}^n \frac{(y_i - \hat y_i)^2}{(1-H_{ii})^2}
\end{align*}

===== A consequence of the fundamental Lemma on Cook's distance =====

__**Lemma**__ We have
\begin{align}
&\hat \beta_{-i}=\hat \beta-\frac{(X^TX)^{-1} x_i^T (y_i-\hat y_{i})}{1-H_{ii}} \label{eq:fond:2}\\
&\label{eq:sigma} (n-p-1) \hat \sigma_{-i}^2=(n-p) \hat \sigma^2- \frac{(y_i-\hat y_i)^2}{1-H_{ii}}
\end{align}
The second identity is not needed to establish the expression of Cook's distance; it nevertheless shows how the residual variance estimate changes when the $i$-th observation is kept or left out.

=== Proof ===

* (i). Let $\epsilon_i\in \rset^n$ be the vector whose entries are all zero except the $i$-th, which equals 1, and note that $X^T \epsilon_i=x_i^T$. Then,
$$
\hat \beta_{-i}=(X^TX)^{-1} X^T \tilde y=(X^TX)^{-1} X^T (y-\epsilon_i(y_i-\hat y_{-i})) =\hat \beta-(X^TX)^{-1} x_i^T (y_i-\hat y_{-i})=\hat \beta-\frac{(X^TX)^{-1} x_i^T (y_i-\hat y_{i})}{1-H_{ii}}
$$
This completes the proof of \eqref{eq:fond:2}.
* (ii). Denote $u_i=y_i-\hat y_i$, $u_{-i}=y_i-\hat y_{-i}$ and $h_i=H_{ii}$.
We now write
\begin{align*}
\|P_{\img X^\perp}\tilde y\|^2&= \tilde y^T P_{\img X^\perp} \tilde y= (y-u_{-i} \epsilon_i)^T P_{\img X^\perp}(y-u_{-i} \epsilon_i)=\|P_{\img X^\perp} y\|^2+u_{-i}^2 \epsilon_i^T P_{\img X^\perp} \epsilon_i -2 u_{-i}\epsilon_i^T P_{\img X^\perp} y \\
& = \|P_{\img X^\perp} y\|^2+ \frac{u_i^2}{(1-h_i)^2} (1-h_i)-2 \frac{u_i}{1-h_i} \epsilon_i^T(y-\hat y)= \|P_{\img X^\perp} y\|^2-\frac{u_i^2}{1-h_i}
\end{align*}
Since regressing $\tilde y$ on $X$ yields the fitted values $X\hat \beta_{-i}$ and the $i$-th residual $\tilde y_i-x_i\hat \beta_{-i}$ vanishes, we have $\|P_{\img X^\perp}\tilde y\|^2=\sum_{j\neq i}(y_j-x_j\hat \beta_{-i})^2=(n-p-1)\hat \sigma_{-i}^2$, while $\|P_{\img X^\perp} y\|^2=(n-p)\hat \sigma^2$. This concludes the proof of \eqref{eq:sigma}. $\eproof$

This in turn allows us to express **Cook's distance** easily. From \eqref{eq:fond:2}, $X\hat \beta-X\hat \beta_{-i}=\frac{y_i-\hat y_i}{1-H_{ii}}\, X(X^TX)^{-1}x_i^T$, whose squared norm is $\frac{(y_i-\hat y_i)^2}{(1-H_{ii})^2} H_{ii}$ because $x_i(X^TX)^{-1}X^T X (X^TX)^{-1} x_i^T=H_{ii}$. Hence
\begin{align*}
D_i=\frac{\|X\hat \beta-X\hat \beta_{-i}\|^2}{p \hat \sigma^2}=\frac{H_{ii}}{p(1-H_{ii})} r_i^2 \quad \mbox{where} \quad r_i=\frac{y_i-\hat y_i}{\hat \sigma \sqrt{1-H_{ii}}}\,, \quad \hat \sigma^2=\frac{RSS}{n-p}=\frac{\sum_{i=1}^n (y_i - \hat y_i)^2}{n-p}
\end{align*}
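The identities above are easy to check numerically. The following is a minimal sketch, not part of the derivation: it simulates a small regression problem with NumPy (the data, the sample sizes and variable names such as ''loo_resid_shortcut'' are ours) and compares the shortcut \eqref{eq:fond}, together with the resulting LOOCV expression, against an explicit leave-one-out loop.

<code python>
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)

# Full fit: hat beta, hat matrix H, fitted values and residuals
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
H = X @ XtX_inv @ X.T
h = np.diag(H)
resid = y - X @ beta_hat                      # y_i - hat y_i

# Leave-one-out prediction errors via the fundamental lemma (single fit)
loo_resid_shortcut = resid / (1.0 - h)        # y_i - hat y_{-i}

# Leave-one-out prediction errors via explicit refits (n separate fits)
loo_resid_naive = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    beta_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    loo_resid_naive[i] = y[i] - X[i] @ beta_i

print(np.allclose(loo_resid_shortcut, loo_resid_naive))    # True
print(np.isclose(np.mean(loo_resid_shortcut**2),           # LOOCV from one fit
                 np.mean(loo_resid_naive**2)))              # True
</code>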
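In the same spirit, here is a self-contained sketch that checks \eqref{eq:fond:2}, \eqref{eq:sigma} and the closed form of Cook's distance against brute-force refits. It assumes the usual convention that $\hat \sigma_{-i}^2=RSS_{-i}/(n-p-1)$ is the residual variance of the leave-one-out fit; the simulated data and names are again illustrative only.

<code python>
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 4
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
H = X @ XtX_inv @ X.T
h = np.diag(H)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - p)          # hat sigma^2 = RSS / (n - p)

# Closed-form Cook's distance: D_i = H_ii / (p (1 - H_ii)) * r_i^2
r2 = resid**2 / (sigma2_hat * (1.0 - h))      # squared studentized residuals r_i^2
cooks_formula = h / (p * (1.0 - h)) * r2

for i in range(n):
    mask = np.arange(n) != i
    beta_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]

    # (eq:fond:2): leave-one-out coefficients from the full fit only
    beta_i_update = beta_hat - XtX_inv @ X[i] * resid[i] / (1.0 - h[i])
    assert np.allclose(beta_i, beta_i_update)

    # (eq:sigma): (n-p-1) sigma_{-i}^2 = RSS_{-i} = (n-p) sigma^2 - resid_i^2 / (1 - h_i)
    rss_i = np.sum((y[mask] - X[mask] @ beta_i) ** 2)
    assert np.isclose(rss_i, (n - p) * sigma2_hat - resid[i]**2 / (1.0 - h[i]))

    # Cook's distance computed from its definition
    D_i = np.sum((X @ beta_hat - X @ beta_i) ** 2) / (p * sigma2_hat)
    assert np.isclose(D_i, cooks_formula[i])

print("all identities verified")
</code>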