Numerical algorithms: used to draw samples from the posterior distribution

Independent sampling: need to know the form of the posterior distribution, directly or indirectly

Dependent sampling: each draw is based on theta^(i-1), and the chain's long-run (stationary) distribution reflects the true distributional space


Two methods for independent sampling: simple MC sampling and importance sampling.

Simple MC sampling: justified by frequentist results for estimating a mean: 1. the strong law of large numbers guarantees convergence; 2. average g over the draws, E[g(theta)] ~= (1/M) * sum_{i=1..M} g(theta^(i))

CLT: yields a confidence interval; the estimator is approximately normal if M is large enough

As M increases, accuracy increases: the standard error shrinks at the rate 1/sqrt(M); a sketch follows
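A minimal simple-MC sketch. The Beta(3, 5) "posterior" and g(theta) = theta^2 are illustrative assumptions, not from the notes:

```python
# A minimal simple-MC sketch.  The Beta(3, 5) "posterior" and
# g(theta) = theta**2 are illustrative assumptions, not from the notes.
import numpy as np

rng = np.random.default_rng(0)
M = 10_000

theta = rng.beta(3, 5, size=M)          # i.i.d. draws from the posterior
g = theta**2

estimate = g.mean()                      # (1/M) * sum_i g(theta_i)
std_error = g.std(ddof=1) / np.sqrt(M)   # CLT-based MC standard error

# 95% confidence interval from the CLT (valid for large M)
lo, hi = estimate - 1.96 * std_error, estimate + 1.96 * std_error
print(f"E[g(theta)] ~= {estimate:.4f}, 95% CI ({lo:.4f}, {hi:.4f})")
```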

Importance sampling: justified by frequentist results as well; it adds weights to the simple MC average to correct for drawing from a proposal density rather than from the posterior itself

The same conditions are required to invoke the CLT

The standard error formula is different: accuracy depends on both M and the weights
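A minimal importance-sampling sketch. The N(0, 1) target, N(1, 2^2) proposal, and g(theta) = theta are illustrative assumptions, not from the notes:

```python
# A minimal importance-sampling sketch.  The N(0, 1) target, N(1, 2^2)
# proposal, and g(theta) = theta are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
M = 10_000

theta = rng.normal(1.0, 2.0, size=M)     # draws from the proposal q
w = stats.norm.pdf(theta, 0, 1) / stats.norm.pdf(theta, 1, 2)  # weights p/q

# Self-normalized importance-sampling estimate of E[g(theta)]
g = theta
estimate = np.sum(w * g) / np.sum(w)

# The standard error now depends on the weights as well as on M
w_norm = w / np.sum(w)
std_error = np.sqrt(np.sum(w_norm**2 * (g - estimate)**2))
print(f"E[g(theta)] ~= {estimate:.4f} +/- {std_error:.4f}")
```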

Gibbs sampling: need to know the full conditionals; the chain converges to the joint posterior.

Dependent sampling works because of Markov chain properties: the equilibrium distribution reflects the true distribution as M goes to infinity
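A minimal Gibbs-sampler sketch. The bivariate normal target with correlation rho = 0.8 is an illustrative assumption; its two full conditionals are known normals:

```python
# A minimal Gibbs-sampler sketch.  The bivariate normal target with
# correlation rho = 0.8 is an illustrative assumption; its two full
# conditionals are the normals drawn from below.
import numpy as np

rng = np.random.default_rng(0)
rho, M, burn = 0.8, 5_000, 500
sd = np.sqrt(1 - rho**2)               # conditional standard deviation

draws = np.empty((M, 2))
t1 = t2 = 0.0                          # arbitrary starting values
for i in range(M):
    t1 = rng.normal(rho * t2, sd)      # draw theta1 | theta2
    t2 = rng.normal(rho * t1, sd)      # draw theta2 | theta1
    draws[i] = t1, t2

# After burn-in the draws behave like (dependent) samples from the joint
print(np.corrcoef(draws[burn:, 0], draws[burn:, 1])[0, 1])  # ~ rho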

Estimating a marginal pdf, e.g. for theta1:

using the theta1 draws to do kernel density estimation

using the theta2 draws to do Rao-Blackwellized (R-B) estimation: average the known full conditional p(theta1 | theta2^(i)) over the theta2 draws; both are sketched below
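Two estimates of the marginal pdf of theta1, reusing the illustrative bivariate normal setup above (the true marginal is N(0, 1)):

```python
# Two estimates of the marginal pdf of theta1, reusing the bivariate normal
# setup above (true marginal is N(0, 1)); all settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho, M, burn = 0.8, 5_000, 500
sd = np.sqrt(1 - rho**2)

t1s, t2s = np.empty(M), np.empty(M)
t1 = t2 = 0.0
for i in range(M):                     # same Gibbs loop as above
    t1 = rng.normal(rho * t2, sd)
    t2 = rng.normal(rho * t1, sd)
    t1s[i], t2s[i] = t1, t2
t1s, t2s = t1s[burn:], t2s[burn:]

grid = np.linspace(-3, 3, 61)

# 1. Kernel density estimate built directly from the theta1 draws
p_kde = stats.gaussian_kde(t1s)(grid)

# 2. Rao-Blackwellized estimate: average the known full conditional
#    p(theta1 | theta2) over the theta2 draws
p_rb = stats.norm.pdf(grid[:, None], rho * t2s[None, :], sd).mean(axis=1)
# p_rb is typically smoother and less variable than p_kde because it
# exploits the known conditional density instead of smoothing raw draws.
```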

Griddy Gibbs: when we don't know the full conditionals themselves, plain Gibbs sampling is off the table. But if we know the kernel of a full conditional, we can use a deterministic method (evaluating the kernel on a grid) to estimate the normalizing constant, c-hat. From that we get an estimate of the distribution of interest, CDF-hat, and use CDF-hat to make draws from the approximate full conditional (by inversion).
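A minimal sketch of one griddy-Gibbs draw. The kernel exp(-theta^4), the grid, and its bounds are illustrative assumptions:

```python
# A minimal sketch of one griddy-Gibbs draw.  The kernel exp(-theta**4),
# the grid, and its bounds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

grid = np.linspace(-3, 3, 401)
dx = grid[1] - grid[0]
kernel = np.exp(-grid**4)              # full conditional known up to a constant

c_hat = kernel.sum() * dx              # deterministic estimate of the
pdf_hat = kernel / c_hat               # normalizing constant c

cdf_hat = np.cumsum(pdf_hat) * dx      # CDF-hat on the grid
cdf_hat /= cdf_hat[-1]                 # guard against discretization error

# Draw from the approximate full conditional by inverting CDF-hat
u = rng.uniform(size=1_000)
draws = np.interp(u, cdf_hat, grid)
```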

MH (Metropolis-Hastings): adds an extra sub-chain (an accept/reject step) to the Gibbs sampler when a full conditional cannot be sampled directly; we need to evaluate the acceptance probability (and monitor the resulting acceptance rate).
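A minimal random-walk MH sketch, of the kind inserted as a sub-chain in a Gibbs sampler; the target kernel exp(-theta^4) and step size are illustrative assumptions:

```python
# A minimal random-walk MH step, of the kind inserted as a sub-chain in a
# Gibbs sampler when a full conditional is known only up to a constant.
# The target kernel exp(-theta**4) and step size are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def log_kernel(theta):
    return -theta**4                   # log of the unnormalized target

def mh_step(theta, step=0.5):
    proposal = theta + rng.normal(0, step)   # symmetric random-walk proposal
    # Acceptance probability min(1, k(proposal)/k(theta)); the unknown
    # normalizing constant cancels in the ratio.
    if np.log(rng.uniform()) < log_kernel(proposal) - log_kernel(theta):
        return proposal, True
    return theta, False

M, accepts = 5_000, 0
theta = 0.0
chain = np.empty(M)
for i in range(M):
    theta, accepted = mh_step(theta)
    chain[i] = theta
    accepts += accepted
print(f"acceptance rate: {accepts / M:.2f}")  # tune step toward roughly 0.2-0.5
```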

Algorithm diagnostics

Standard error formula for dependent sampling: see part 6; the CLT still applies, with the simulation variance inflated by the dependence between draws.

Measuring dependence:

Autocorrelation at lag k: a measure of how far we are from the accuracy yielded by independent draws

Autocorrelation time k(g)-hat: the inflation of the simulation variance due to the autocorrelation in the draws; k(g)-hat = 1 + 2 * sum over lags of the estimated autocorrelations

ESS: the reduction in information in the MCMC sample relative to what would be contained in a sample of M i.i.d. draws; ESS = M / k(g)-hat

IF = k(g)-hat: the inefficiency factor is just another name for the autocorrelation time
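A minimal sketch computing these three quantities from a generic chain. The AR(1) test chain and the truncation rule (stop at the first negative autocorrelation) are illustrative assumptions:

```python
# A minimal sketch of the dependence diagnostics above.  The AR(1) test
# chain and the truncation rule (stop at the first negative autocorrelation)
# are illustrative assumptions.
import numpy as np

def autocorr(x, k):
    """Sample autocorrelation of chain x at lag k."""
    x = x - x.mean()
    return (x[:-k] * x[k:]).sum() / (x * x).sum()

def diagnostics(x, max_lag=100):
    rho = np.array([autocorr(x, k) for k in range(1, max_lag + 1)])
    if (rho < 0).any():
        rho = rho[: np.argmax(rho < 0)]    # truncate at first negative lag
    kappa_hat = 1 + 2 * rho.sum()          # autocorrelation time = IF
    ess = len(x) / kappa_hat               # effective sample size
    return kappa_hat, ess

# AR(1) chain with coefficient 0.9: true autocorrelation time is
# (1 + 0.9) / (1 - 0.9) = 19, so kappa_hat should come out near 19.
rng = np.random.default_rng(0)
x = np.empty(20_000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()
print(diagnostics(x))
```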

Measuring convergence: