DL(22) - Markov Chain Monte Carlo
sampling in
Directed graphical models
use ancestral sampling
i.e. sample each node given its parents, moving from roots to leaves
easy task (see the sketch below)
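As an illustration, here is a minimal sketch of ancestral sampling, assuming a toy binary chain a → b → c with made-up conditional probability tables (all names and numbers here are hypothetical, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CPTs for a toy directed model a -> b -> c (binary variables);
# all numbers are invented for illustration.
p_a = np.array([0.6, 0.4])               # P(a)
p_b_given_a = np.array([[0.7, 0.3],      # P(b | a = 0)
                        [0.2, 0.8]])     # P(b | a = 1)
p_c_given_b = np.array([[0.9, 0.1],      # P(c | b = 0)
                        [0.5, 0.5]])     # P(c | b = 1)

def ancestral_sample():
    # sample each node given its parents, moving from roots to leaves
    a = rng.choice(2, p=p_a)
    b = rng.choice(2, p=p_b_given_a[a])
    c = rng.choice(2, p=p_c_given_b[b])
    return a, b, c

print([ancestral_sample() for _ in range(5)])
```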
Undirected models
use a Monte Carlo algorithm that incrementally updates samples, coming closer to sampling from the right distribution at each step
this is called a
Markov Chain
more difficult: we cannot get a fair sample in one pass
Markov Chain
running the Markov Chain: means repeatedly updating the state x to a value x' sampled from T(x' | x)
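A minimal sketch of running a chain, assuming a hypothetical 3-state transition distribution T (the matrix values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition distribution T(x' | x) over 3 states;
# column j holds the distribution of the next state x' given x = j.
T = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.2],
              [0.0, 0.1, 0.8]])

x = 0                               # arbitrary initial state
for _ in range(1000):               # running the chain = repeatedly updating x
    x = rng.choice(3, p=T[:, x])    # x' ~ T(x' | x)
print(x)                            # one (approximately fair) sample
```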
probability of the chain landing in state x' at step t+1 is given by:
q(t+1)(x') = ∑ₓ q(t)(x) T(x' | x)
q(t)(x): probability of being in state x at step t
T(x' | x): probability of moving from state x to state x'
represent the transition operator T using a stochastic matrix A:
Aᵢ,ⱼ = T(x' = i | x = j)
where each state is represented by a positive integer i
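With the matrix representation, the update q(t+1)(x') = ∑ₓ q(t)(x) T(x' | x) is just a matrix-vector product. A sketch of how the state distribution evolves toward equilibrium, reusing the same invented 3-state matrix as above:

```python
import numpy as np

# A[i, j] = T(x' = i | x = j): a stochastic matrix, every column sums to 1
# (same hypothetical values as in the previous sketch).
A = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.2],
              [0.0, 0.1, 0.8]])

q = np.array([1.0, 0.0, 0.0])  # q(0): all probability mass on state 0
for t in range(100):
    q = A @ q                  # q(t+1)(x') = ∑ₓ q(t)(x) T(x' | x)
print(q)                       # approaches the chain's stationary distribution
```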
a Markov Chain is defined by:
a transition distribution T(x' | x): the probability of starting from x and going to x' with a random update
a random state x
Gibbs Sampling
Block Gibbs trick: conditionally independent variables may be sampled simultaneously
Pseudocode example (implemented in the sketch below):
1) initialize a, s, b
2) for n repetitions
3) sample a from P(a | s) and b from P(b | s)
4) sample s from P(s | a, b)
for each variable, randomly sample that variable given its Markov blanket (neighbors in the graph)
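A runnable sketch of the pseudocode above, assuming a hypothetical binary model P(a, s, b) ∝ exp(w_as·a·s + w_sb·s·b), in which a and b are conditionally independent given s; the weights are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical binary MRF with edges a - s and s - b:
# P(a, s, b) ∝ exp(w_as * a * s + w_sb * s * b), variables in {0, 1}.
w_as, w_sb = 1.5, -0.5

a, s, b = 1, 0, 1                                  # 1) initialize a, s, b
samples = []
for _ in range(10000):                             # 2) for n repetitions
    # 3) a and b are conditionally independent given s, so they form a
    #    block that can be sampled simultaneously (block Gibbs trick)
    a = int(rng.random() < sigmoid(w_as * s))      # a ~ P(a | s)
    b = int(rng.random() < sigmoid(w_sb * s))      # b ~ P(b | s)
    # 4) sample s given its Markov blanket {a, b}
    s = int(rng.random() < sigmoid(w_as * a + w_sb * b))  # s ~ P(s | a, b)
    samples.append((a, s, b))

print(np.mean(samples, axis=0))  # empirical marginals of a, s, b
```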
Problems
Generally infeasible to:
know how far a chain is from equilibrium
know whether a chain is at equilibrium
know ahead of time how long mixing will take
In practice, mixing can take an infeasibly long time, especially for
distributions with strong correlations between variables
distributions with multiple highly separated modes
high-dimensional distributions
heuristic approach: just run the chain for n steps, and hope for the best