STDSR, Bandit Algorithms, Poisson Distribution
STDSR
-
Bayesian Statistics
Bayes' Theorem: update the probability of a hypothesis (H) given evidence (E)
Bayesian statistics takes into account any prior knowledge the statistician has about the experiment; combining that prior with observed data via Bayes' theorem is the principle of statistical inference known as the Bayesian approach.
Example: Coin Flips
theta - probability of heads
1 - theta - probability of tails
D - the observed data: h heads and t tails; P(D | theta) is the likelihood
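A sketch of the update in this notation: the posterior for θ is proportional to the likelihood times the prior,

```latex
P(\theta \mid D) \;\propto\; P(D \mid \theta)\, P(\theta)
              \;=\; \theta^{h}\,(1-\theta)^{t}\, P(\theta)
```

With a uniform prior on θ, this gives a Beta(h + 1, t + 1) posterior.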
-
Frequentists derive statistics from probabilities understood as long-run frequencies of events in repeated experiments. Bayesian statistics, on the other hand, starts from a prior, which allows Bayes' theorem to be used to compute posteriors that reflect both the new evidence and the prior beliefs.
Frequentists use methods like hypothesis testing and confidence intervals, while Bayesians use probability distributions to express uncertainty about parameters.
Theory of Distribution
-
Central Limit Theorem
Let X1, X2, ..., Xn be i.i.d. random variables
with a common PDF, finite mean μ, and finite variance σ². Then, as n grows, the distribution of the sample mean approaches a normal distribution, regardless of the shape of the original distribution.
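In symbols (a standard statement, assuming finite mean μ and variance σ²):

```latex
\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad
\frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \;\xrightarrow{d}\; \mathcal{N}(0, 1)
\quad \text{as } n \to \infty
```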
-
Law of Large Numbers
As the sample size grows, the sample mean approaches the true mean (the mean of the population),
or
As the number of trials or observations increases, the observed probability approaches the theoretical (expected) probability.
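In symbols (with μ the population mean):

```latex
\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i \;\longrightarrow\; \mu
\quad \text{as } n \to \infty
```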
-
Bandit Algorithms
Bandit algorithms manage the trade-off between exploration (trying out new actions to discover their rewards) and exploitation (using known actions to gain maximum rewards).
Epsilon greedy
At each step, with probability ε, choose an arm at random (exploration), and with probability 1 − ε, choose the arm with the highest estimated payoff (exploitation).
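A minimal sketch in Python, assuming a toy Bernoulli bandit (the arm payoff probabilities, ε = 0.1, and all variable names are illustrative, not from the notes):

```python
import numpy as np

# Illustrative toy Bernoulli bandit: the payoff probabilities below are
# made up for the example, not taken from the notes.
rng = np.random.default_rng(0)
true_probs = [0.2, 0.5, 0.7]           # unknown payoff probability per arm
counts = np.zeros(len(true_probs))     # number of pulls per arm
values = np.zeros(len(true_probs))     # running estimate of each arm's payoff
epsilon = 0.1

for _ in range(10_000):
    if rng.uniform() < epsilon:
        arm = int(rng.integers(len(true_probs)))     # explore: random arm
    else:
        arm = int(np.argmax(values))                 # exploit: best estimate so far
    reward = float(rng.uniform() < true_probs[arm])  # Bernoulli reward
    counts[arm] += 1
    # incremental mean update of the estimated payoff
    values[arm] += (reward - values[arm]) / counts[arm]

print(values)   # estimates roughly approach the true payoff probabilities
```

The incremental mean update keeps a running payoff estimate per arm without storing the full reward history.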
-
Poisson Distribution
- Discrete distribution
- Models the number of events occurring in a fixed time period
- Requires a single rate parameter lambda
- Takes non-negative integer values 0, 1, 2, ... (unbounded above)
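For reference, the PMF and moments:

```latex
P(X = k) = \frac{\lambda^{k} e^{-\lambda}}{k!}, \qquad k = 0, 1, 2, \dots,
\qquad \mathbb{E}[X] = \operatorname{Var}(X) = \lambda
```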
MCMC (Markov Chain Monte Carlo)
A stationary distribution π exists if:
1. The Markov chain is irreducible (it is possible to reach any state from any state);
2. A detailed balance condition holds for all i, j: π_i · p(i→j) = π_j · p(j→i).
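A tiny Python sketch of both ideas on a 2-state chain (the transition matrix P is an arbitrary illustrative choice): π is recovered as the left eigenvector of P with eigenvalue 1, and detailed balance is then checked directly.

```python
import numpy as np

# Illustrative 2-state Markov chain; the transition matrix is an arbitrary
# choice for the example, not from the notes.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Stationary distribution pi solves pi P = pi, i.e. pi is the left
# eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print(pi)                                   # [0.75, 0.25]

# Detailed balance check: pi_i * p(i -> j) == pi_j * p(j -> i)
print(pi[0] * P[0, 1], pi[1] * P[1, 0])     # both 0.075
```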
Gibbs sampling
Rather than probabilistically picking the next state all at once, you make a separate probabilistic choice for one of the d dimensions, where each choice depends on the other d − 1 dimensions.
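A minimal Python sketch, assuming a bivariate standard normal target with correlation ρ = 0.8 (an illustrative choice); each coordinate is resampled from its conditional given the other:

```python
import numpy as np

# Illustrative target: bivariate standard normal with correlation rho.
# For this target the full conditionals are x | y ~ N(rho*y, 1 - rho^2)
# and y | x ~ N(rho*x, 1 - rho^2).
rho = 0.8
rng = np.random.default_rng(0)
x, y = 0.0, 0.0
samples = []
for _ in range(10_000):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))   # update dimension 1 given dimension 2
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))   # update dimension 2 given dimension 1
    samples.append((x, y))

samples = np.array(samples)
print(np.corrcoef(samples.T)[0, 1])   # should be close to 0.8
```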
Hamiltonian Monte Carlo (HMC)
Addresses the inefficiencies of traditional MCMC methods.
It utilizes Hamiltonian dynamics to propose new states in the Markov chain.
Converges faster.
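A minimal Python sketch, assuming a 1-D standard normal target (an illustrative choice): momentum is resampled each step, Hamiltonian dynamics are approximated with a leapfrog integrator, and the proposal is accepted or rejected with a Metropolis step.

```python
import numpy as np

# Illustrative target: 1-D standard normal, so the potential energy is
# U(q) = -log p(q) = q^2 / 2 (up to a constant) and grad U(q) = q.

def U(q):
    return 0.5 * q**2

def grad_U(q):
    return q

def hmc_step(q, rng, step_size=0.1, n_leapfrog=20):
    p = rng.normal()                         # resample auxiliary momentum
    q_new, p_new = q, p
    # Leapfrog integration of Hamiltonian dynamics
    p_new -= 0.5 * step_size * grad_U(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += step_size * p_new
        p_new -= step_size * grad_U(q_new)
    q_new += step_size * p_new
    p_new -= 0.5 * step_size * grad_U(q_new)
    # Metropolis accept/reject on the total energy H = U + kinetic energy
    current_H = U(q) + 0.5 * p**2
    proposed_H = U(q_new) + 0.5 * p_new**2
    if rng.uniform() < np.exp(current_H - proposed_H):
        return q_new                         # accept the proposal
    return q                                 # reject: keep the current state

rng = np.random.default_rng(0)
q, samples = 0.0, []
for _ in range(5_000):
    q = hmc_step(q, rng)
    samples.append(q)
print(np.mean(samples), np.std(samples))     # should be close to 0 and 1
```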
-
Exponential Distribution
- Continuous counterpart of the Poisson: models the time between events
- Events independent
- Events occur at the same rate
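For reference, the PDF and mean:

```latex
f(x;\lambda) = \lambda e^{-\lambda x}, \quad x \ge 0,
\qquad \mathbb{E}[X] = \tfrac{1}{\lambda}
```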
-
Bayes' theorem components:
- Posterior = P(H|E)
- Likelihood = P(E|H)
- Prior = P(H)
- Normalizing constant = P(E)
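Putting the components together (Bayes' theorem in full):

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```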