

Monte Carlo and Advanced simulation methods

Introduction

Monte Carlo methods are the main ingredient of many numerical algorithms widely used in Econometrics, Finance, Biology and, more generally, in all domains linked with Statistics and Machine Learning. In Bayesian Statistics, for example, inference on the model combines a priori knowledge on the parameter with a given family of observations. To compute numerically any quantity expressed as the expectation of a function of interest under the a posteriori distribution, efficient computational algorithms are then crucially needed.
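
As a toy illustration (not taken from the lecture notes), suppose the posterior happens to be a Gamma distribution with known parameters and we want the posterior expectation of h(theta) = theta^2: a plain Monte Carlo estimate simply averages h over i.i.d. draws from the posterior. The parameters and the function h below are purely hypothetical choices.

<code python>
import numpy as np

# Minimal sketch, not part of the lecture notes: plain Monte Carlo
# approximation of a posterior expectation E[h(theta) | observations].
# The Gamma posterior, its parameters and the function h are hypothetical.
rng = np.random.default_rng(0)
a, b = 3.0, 2.0                       # assumed posterior: Gamma(shape=a, scale=b)
h = lambda theta: theta ** 2          # function of interest

n = 100_000
theta = rng.gamma(shape=a, scale=b, size=n)   # i.i.d. draws from the posterior
estimate = h(theta).mean()                    # (1/n) * sum_i h(theta_i)
print(estimate)   # here E[theta^2] = a * (a + 1) * b^2 = 48
</code>

By the law of large numbers, this empirical average converges to the posterior expectation as n grows.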

In this course, we present several ways of sampling, either exactly or approximately, from a target distribution. The performance of these algorithms is usually measured in terms of the variance of the error, and we also present classical variants that can reduce, sometimes dramatically, this variance, thereby enhancing the quality of the approximation (a small Python sketch of such a comparison is given below). Throughout the course, many illustrations in Python help to grasp the algorithms introduced.
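
To make the variance discussion concrete, here is a hedged sketch (with an arbitrary choice of target and sample size) comparing plain Monte Carlo with antithetic variates for I = E[exp(U)], U uniform on (0, 1). Both estimators use the same number of function evaluations, but the antithetic pairs are negatively correlated, so the estimated variance of the second estimator is markedly smaller.

<code python>
import numpy as np

# Illustrative sketch: estimate I = E[exp(U)] = e - 1, U ~ Unif(0, 1),
# with plain Monte Carlo and with antithetic variates (pairing U and 1 - U).
rng = np.random.default_rng(1)
n = 10_000                                   # same budget of exp() evaluations

u = rng.uniform(size=n)
plain_terms = np.exp(u)                      # plain Monte Carlo terms

v = rng.uniform(size=n // 2)
anti_terms = 0.5 * (np.exp(v) + np.exp(1.0 - v))   # antithetic pairs

print(plain_terms.mean(), anti_terms.mean())       # both close to e - 1 ≈ 1.7183
print(plain_terms.var() / n, anti_terms.var() / (n // 2))   # estimated estimator variances
</code>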

At the end of this course, a student will be able to

  • propose several ways of producing approximate samples from a target distribution.
  • understand the basics of Monte Carlo methods and identify the main factors that influence the quality of the approximations.
  • propose variants that reduce the variance of the error.

Prerequisites: students must have previously taken a course in probability. A basic background in statistics is a plus but is not mandatory.

Instructions

This 24-hour course will be given at VNUHCM in June 2023. It will be based on the following lecture notes (updated regularly):

Computer sessions and written notes

Provisional programme

Day | Date               | 8.30-10.00 | 10.15-11.45 | 13.00-14.30
1   | Monday, 19 June    | Lecture    | Tutorial    | Computer Session
2   | Tuesday, 20 June   | Lecture    | Tutorial    | Computer Session
3   | Wednesday, 21 June | Lecture    | Tutorial    | Computer Session
4   | Thursday, 22 June  | Lecture    | Tutorial    | Computer Session
5   | Friday, 23 June    | Lecture    | Tutorial    |
6   | Saturday, 24 June  | Lecture    | Tutorial    |
  • Day 1: 4H30
    • Lecture 1H30: Recap on measures, integration, random variables, independence, LLN.
    • Tutorial 1H30: Exercises.
    • Computer 1H30: Histograms and sampling of random variables. LLN.
  • Day 2: 4H30
    • Lecture 1H30: Central Limit Theorem, confidence intervals, Slutsky's lemma.
    • Tutorial 1H30: Exact sampling, quantile function. Rejection sampling: exercise (a small Python sketch is given after this programme).
    • Computer 1H30: CLT and confidence intervals. Exact sampling.
  • Day 3: 3H
    • Lecture 1H30: Approximate sampling, importance sampling. Markov chain Monte Carlo (MCMC).
    • Tutorial 1H30: Approximate sampling, Markov chains: exercises.
  • Day 4: 4H30
    • Lecture 1H30: Variance reduction (I): antithetic and control variates.
    • Tutorial 1H30: Markov chains and antithetic variables.
    • Computer 1H30: Importance sampling and MCMC.
  • Day 5: 4H30
    • Lecture 1H30: Variance reduction (II): conditioning, stratified sampling.
    • Tutorial 1H30: Exercises on variance reduction.
    • Computer 1H30: Variance reduction (II).
  • Day 6: 3H
    • Lecture 1H30: Other approximate sampling methods (Variational Inference).
    • Tutorial 1H30: Exercise.
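
The rejection sampling exercise of Day 2 can be previewed with the following minimal sketch; the target (the Beta(2, 2) density), the uniform proposal and the bound M are choices made here for illustration only, not material from the lecture notes.

<code python>
import numpy as np

# Minimal sketch of rejection sampling: target f(x) = 6 x (1 - x) on [0, 1]
# (the Beta(2, 2) density), proposal g = Uniform(0, 1), bound f <= M * g with M = 1.5.
rng = np.random.default_rng(2)

def target_density(x):
    return 6.0 * x * (1.0 - x)

def rejection_sample(n_samples, M=1.5):
    samples = []
    while len(samples) < n_samples:
        x = rng.uniform()                    # draw a proposal from g
        u = rng.uniform()                    # independent uniform for the accept test
        if u <= target_density(x) / M:       # accept with probability f(x) / (M * g(x))
            samples.append(x)
    return np.array(samples)

x = rejection_sample(10_000)
print(x.mean(), x.var())   # close to 1/2 and 1/20, the Beta(2, 2) mean and variance
</code>

The accepted draws are exactly distributed according to f; the average acceptance probability is 1/M, so a tighter bound M gives faster sampling.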