Bayesian Inference (Tom Loredo)
Ideas:
Data do not speak for themselves!
Quantifying uncertainty with probability
Confidence intervals vs. credible intervals
Interpreting PDFs
Frequentists
Frequency of numerous events
Frequency comes from probability:
in repeated IID trials, the expected frequency of an outcome equals its probability
Bayesian
Probability for a single case
Probability can be assigned to the outcome of a single trial, e.g., the next one
Bayesian data analysis
Using Bayesian ideas across many data-analysis tasks: not just inference, but also prediction, decision, design, EDA, data reduction, ...
Motivating example: x̄ ± σ/√N via Monte Carlo
The frequentist confidence level is a property of the procedure, not of the particular interval reported for a given dataset
He showed this through the confidence-interval calculation for the sparse dataset shown on page 30
From the frequentist view, the confidence level is fixed a priori by the procedure, not by the observed data: we can use simulated data to calibrate the coverage of a proposed interval rule, but the actual dataset plays no role in that calibration. The confidence level is a property of the rule and of how the interval is positioned around the data, nothing more (a minimal coverage simulation is sketched below).
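A minimal sketch (my own, not from the slides) of the Monte Carlo check described above: simulate many datasets from a known Gaussian truth and count how often the interval x̄ ± σ/√N covers the true mean. The specific numbers (mu_true, sigma, N, n_sims) are placeholders.

```python
# Minimal coverage check for the interval xbar +/- sigma/sqrt(N).
# All numbers below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
mu_true, sigma, N, n_sims = 5.0, 2.0, 10, 100_000

x = rng.normal(mu_true, sigma, size=(n_sims, N))   # many simulated datasets
xbar = x.mean(axis=1)
half_width = sigma / np.sqrt(N)                    # known sigma, 1-sigma interval

coverage = np.mean(np.abs(xbar - mu_true) <= half_width)
print(f"empirical coverage: {coverage:.3f}  (expect about 0.683 for +/- 1 sigma)")
```

The coverage comes out near 68.3% regardless of which particular dataset we later observe, which is the sense in which the confidence level belongs to the procedure.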
Probability theory for data analysis: Three theorems
It is very important to express our problem in terms of the axioms of probability theory
Two important theorems
Bayes's Theorem
Law of total probability (LTP)
Summing (or integrating) a joint probability over one parameter gives the marginal probability of the other. This lets us marginalize the likelihood over parameters we are not interested in
We call such a set of hypotheses a suite (mutually exclusive and exhaustive)
This law also lets us use the denominator of Bayes's theorem as a normalization factor.
IMPORTANT: in Bayes's theorem, the data change the support for a hypothesis in proportion to how well that hypothesis predicts the observed data (see the standard forms below)
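For reference, the two theorems in standard notation (written from memory, not transcribed from the slides; I denotes background information):

```latex
% Bayes's theorem for a hypothesis H_i given data D and background info I
\[
P(H_i \mid D, I) = \frac{P(H_i \mid I)\, P(D \mid H_i, I)}{P(D \mid I)}
\]

% Law of total probability over a suite {H_i} (exclusive and exhaustive);
% this is what makes the denominator a normalization constant
\[
P(D \mid I) = \sum_i P(H_i \mid I)\, P(D \mid H_i, I)
\]
```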
Inference with parametric models
Classes of problems
Single-model inference
Like parameter estimation and prediction
Multi-model inference
model comparison/choice
model averaging
Used when there is model uncertainty but we still need to compute other quantities: for example, to account for systematic error, or to report a parameter without committing to a single model
Also useful for prediction that accounts for model uncertainty by combining all available models (a model-averaging sketch follows this list)
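A hedged sketch of model averaging under the idea above: posterior model probabilities are proportional to prior probability times marginal likelihood (evidence), and predictions are combined with those weights. The evidences, prior probabilities, and per-model predictions below are made-up placeholders.

```python
# Sketch of Bayesian model averaging with placeholder numbers.
import numpy as np

evidences = np.array([1.2e-5, 3.4e-5, 0.4e-5])   # Z_k = p(D | M_k)  (placeholders)
prior_probs = np.ones(3) / 3                      # equal prior model probabilities

post_probs = prior_probs * evidences              # unnormalized p(M_k | D)
post_probs /= post_probs.sum()                    # normalize over the suite of models

# Each model's prediction for some quantity of interest (placeholders)
model_predictions = np.array([0.80, 0.95, 0.60])

bma_prediction = np.sum(post_probs * model_predictions)
print("posterior model probabilities:", np.round(post_probs, 3))
print("model-averaged prediction:", round(bma_prediction, 3))
```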
Model Checking
Summaries of posterior
Best fit values
Mode
Posterior mean
Uncertainties (a sampling-based sketch of these summaries follows this sub-list)
Credible region
Highest Posterior Density (HPD)
Posterior standard deviation, variance, covariances
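A sketch of computing the summaries above from posterior samples (e.g., MCMC output). The gamma draws below are a stand-in posterior, and the HPD routine is the usual shortest-interval-from-sorted-samples trick, not code from the lectures.

```python
# Posterior summaries from 1-D posterior samples (stand-in example).
import numpy as np

rng = np.random.default_rng(0)
samples = rng.gamma(shape=3.0, scale=1.0, size=50_000)   # stand-in posterior samples

post_mean = samples.mean()
post_sd = samples.std()

# Crude posterior mode from a histogram of the samples
counts, edges = np.histogram(samples, bins=200)
mode_est = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])

# Equal-tailed 95% credible interval
eq_tail = np.percentile(samples, [2.5, 97.5])

# Highest-posterior-density (HPD) interval: shortest interval with 95% of the mass
def hpd_interval(x, mass=0.95):
    xs = np.sort(x)
    n = len(xs)
    k = int(np.floor(mass * n))
    widths = xs[k:] - xs[:n - k]
    i = np.argmin(widths)
    return xs[i], xs[i + k]

print("mode, mean, sd:", mode_est, post_mean, post_sd)
print("equal-tailed 95%:", eq_tail)
print("HPD 95%:", hpd_interval(samples))
```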
Marginal distributions
Nuisance parameters
Marginal Likelihood
Integrate the likelihood over the nuisance parameter (weighted by its prior)
The preferred approach from the Bayesian viewpoint
Profile Likelihood
For each value of the interesting parameter, plug in the best-fit (maximizing) value of the nuisance parameter and evaluate the likelihood there
But to approximate the marginal likelihood, the profile must also be multiplied by the conditional width (uncertainty) of the nuisance parameter as a function of the interesting parameter (slides 68-69); a toy numerical comparison is sketched below
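A toy numerical comparison (my construction, not from the slides): a two-parameter likelihood whose conditional width in the nuisance parameter phi depends on the interesting parameter theta. Profiling alone misses that width; multiplying the profile by the conditional width recovers the marginal shape, which is the point of slides 68-69.

```python
# Toy comparison of marginal vs. profile likelihood over a nuisance parameter.
import numpy as np

theta = np.linspace(-3, 3, 201)                 # interesting parameter
phi = np.linspace(-10, 10, 2001)                # nuisance parameter
dphi = phi[1] - phi[0]
T, P = np.meshgrid(theta, phi, indexing="ij")

# Toy likelihood: Gaussian in phi whose width depends on theta
sigma_phi = 0.5 + 0.2 * T**2
L = np.exp(-0.5 * T**2) * np.exp(-0.5 * (P / sigma_phi)**2)

marginal = L.sum(axis=1) * dphi                 # integrate over the nuisance parameter
profile = L.max(axis=1)                         # maximize over the nuisance parameter

# The profile ignores how much phi-volume supports each theta; multiplying it by
# the conditional width of phi given theta recovers the marginal shape here.
width = np.sqrt(2 * np.pi) * (0.5 + 0.2 * theta**2)
print(np.allclose(marginal, profile * width, rtol=1e-3))   # True for this toy model
```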
Quick-looks
Keywords to Search further
Not understood