Statistical Methods
-
3 Monte Carlo method
transformation method: draw \(r\) uniform \(\Rightarrow\) transform to \(x \sim f(x)\)
equal probability: \(g(r)\,dr = f(x)\,dx\)
=> solve \(F(x(r)) = G(r) = r\) for \(x\)
not unique: \(1-r\) is also uniform
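The transformation method above can be sketched for an exponential PDF (the choice of \(f(x)\) here is illustrative, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_exponential(tau, n):
    """Inverse-transform sampling: solve F(x) = r for x.

    For f(x) = (1/tau) exp(-x/tau), F(x) = 1 - exp(-x/tau),
    so x = -tau * ln(1 - r).  Since 1 - r is also uniform,
    x = -tau * ln(r) works equally well (the 'not unique' remark).
    """
    r = rng.uniform(size=n)
    return -tau * np.log(r)

x = sample_exponential(tau=2.0, n=100_000)
print(x.mean())  # close to tau = 2.0
```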
acceptance-rejection method:
requires bounded support and maximum: \(\exists\, x_{min}, x_{max}, f_{max}\)
generate \(x\) uniformly in \([x_{min},x_{max}]\)
generate \(u\) uniformly in \([0,f_{max}]\)
accept if \(u < f(x)\)
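A minimal sketch of the basic acceptance-rejection recipe (accept \(x\) when a uniform draw \(u \in [0, f_{max}]\) falls below \(f(x)\)); the truncated-Gaussian target is my example choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_reject(f, x_min, x_max, f_max, n):
    """Generate x uniform in [x_min, x_max] and u uniform in
    [0, f_max]; keep x whenever u < f(x)."""
    samples = []
    while len(samples) < n:
        x = rng.uniform(x_min, x_max, size=n)
        u = rng.uniform(0.0, f_max, size=n)
        samples.extend(x[u < f(x)])
    return np.array(samples[:n])

# example target: Gaussian shape exp(-x^2/2), truncated to [-3, 3]
f = lambda x: np.exp(-0.5 * x**2)
x = accept_reject(f, -3.0, 3.0, 1.0, 50_000)
print(x.mean(), x.std())  # mean near 0, std just below 1
```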
enhanced: enclose \(f\) with an envelope \(g(x)\ge f(x)\) such that RVs distributed as \(\frac{g(x)}{\int g(x')dx'}\) can be generated directly
generate \(x\) from the normalised envelope
generate \(u\) from uniform \([0,g(x)]\)
=> accept if \(u<f(x)\)
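A sketch of the envelope variant, assuming a half-Gaussian target with an exponential envelope (a standard textbook pairing, not taken from the notes): the envelope \(g(x)=\sqrt{e}\,e^{-x}\) encloses \(f(x)=e^{-x^2/2}\) on \(x\ge 0\), touching it at \(x=1\).

```python
import numpy as np

rng = np.random.default_rng(1)

def half_gaussian_via_envelope(n):
    """Envelope rejection: target f(x) = exp(-x^2/2) on x >= 0,
    envelope g(x) = sqrt(e) * exp(-x) >= f(x).
    x is drawn from the normalised envelope (an exponential),
    u from U[0, g(x)]; accept if u < f(x)."""
    out = []
    while len(out) < n:
        x = rng.exponential(1.0, size=n)      # ~ g(x) / ∫g(x')dx'
        g = np.sqrt(np.e) * np.exp(-x)
        u = rng.uniform(0.0, g)
        out.extend(x[u < np.exp(-0.5 * x**2)])
    return np.array(out[:n])

x = half_gaussian_via_envelope(100_000)
print(x.mean())  # half-normal mean = sqrt(2/pi) ≈ 0.798
```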
applications:
problems not solvable analytically
cases where error propagation breaks down
many free parameters + unknown correlations between them
pseudorandom:
behave like true random numbers, but are reproducible and repeat after a finite period
randomness tests
- uniformity
- uncorrelated (at all levels)
test:
equidistribution
up-down tests
pairwise, triplewise correlation
simulation: numerical calculation whose solution is related to a probability distribution
integration: MC especially superior in higher dimensions (error \(\propto 1/\sqrt{n}\) independent of dimension)
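MC integration in a higher dimension can be sketched as follows (the integrand and dimension are my illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)

def mc_integrate(func, dim, n):
    """Monte Carlo integral of func over the unit hypercube [0,1]^dim.
    The statistical error scales as 1/sqrt(n) regardless of dim,
    which is why MC beats quadrature grids in high dimensions."""
    x = rng.uniform(size=(n, dim))
    y = func(x)
    return y.mean(), y.std() / np.sqrt(n)

# example: integral of sum_i x_i^2 over [0,1]^10 is 10/3
est, err = mc_integrate(lambda x: (x**2).sum(axis=1), dim=10, n=200_000)
print(est, err)  # est ≈ 3.333
```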
1 Probability
-
Correlation
-
-
(linear) Correlation coefficient
\(\rho_{xy}=\frac{V_{xy}}{\sigma_x\sigma_y}\)
x, y independent => \(\rho_{xy}=0\), e.g. independent measurements
! not the other way around: e.g. \(y=x^2\) with symmetric \(x\) gives \(\rho_{xy}=0\) despite full dependence
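The \(y=x^2\) counterexample can be checked numerically (a small demonstration, with my choice of a symmetric uniform \(x\)):

```python
import numpy as np

rng = np.random.default_rng(3)

# x symmetric around 0, y = x^2: fully dependent, yet rho_xy = 0
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x**2

v_xy = np.mean((x - x.mean()) * (y - y.mean()))  # covariance V_xy
rho = v_xy / (x.std() * y.std())                 # rho = V_xy / (sigma_x sigma_y)
print(rho)  # ≈ 0: no linear correlation despite exact dependence
```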
-
5 Parameter estimation
-
terminology
statistic: \(f(\vec x)\)
estimator: a statistic \(\hat\theta(\vec x)\) used to estimate a parameter \(\theta\)
-
the sampling distribution is the PD of the estimator \(\hat\theta\)
\(E[\hat\theta(\vec x)]=\int\hat\theta\, g(\hat\theta;\theta)\,d\hat\theta=\int\cdots\int\hat\theta(\vec x)\,f(x_1;\theta)\cdots f(x_n;\theta)\,dx_1\cdots dx_n\); the product \(f(x_1;\theta)\cdots f(x_n;\theta)\) is the joint PDF of the sample, and read as a function of \(\theta\) it is the likelihood
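The sampling distribution \(g(\hat\theta;\theta)\) can be mapped out by repeating the experiment in simulation; the estimator here (sample mean of exponential draws) is my illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(5)

# estimator: sample mean of n exponential(tau) draws, tau = 2;
# repeating the "experiment" many times samples g(theta_hat; theta)
tau, n, n_experiments = 2.0, 50, 20_000
theta_hat = rng.exponential(tau, size=(n_experiments, n)).mean(axis=1)

print(theta_hat.mean())  # E[theta_hat] ≈ tau (the estimator is unbiased)
print(theta_hat.std())   # spread ≈ tau / sqrt(n) ≈ 0.283
```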
-
-
1 Uncertainties
-
Central Limit Theorem
n independent continuous RV \(x_i\) with \(\mu_i\) and \(\sigma_i\)
=> \(X=\sum_{i=1}^{n}x_i\) approaches a Gaussian RV as \(n\to\infty\)
with \(\mu=\sum_{i=1}^{n}\mu_i\) and \(\sigma^2=\sum_{i=1}^{n}\sigma_i^2\)
exceptions:
- pathological PDs (mean not well-defined)
- one dominant factor
Landau distribution (energy loss of a charged particle): moments undefined because of its long tail
Breit-Wigner (Cauchy) distribution
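Both the theorem and the Cauchy exception can be demonstrated numerically (uniform summands are my example choice):

```python
import numpy as np

rng = np.random.default_rng(9)

# CLT: the sum of n uniform RVs approaches a Gaussian with
# mu = n * 1/2 and sigma^2 = n * 1/12
n, trials = 50, 100_000
X = rng.uniform(size=(trials, n)).sum(axis=1)
print(X.mean(), X.std())  # ≈ 25 and sqrt(50/12) ≈ 2.04

# exception: Cauchy (Breit-Wigner) -- its moments are undefined,
# so the sample mean never stabilises, no matter the sample size
c_means = [rng.standard_cauchy(10_000).mean() for _ in range(5)]
print(c_means)  # wildly varying values, no convergence
```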
-
10 Unfolding
Why?
compare data and theory
- con: folding instead (theory -> MC (detector distortion) -> compare with data) is usually much simpler
+ pro: unfolded data can be compared between experiments directly
+ pro: and compared with theory developed later
-
\(f_{meas}(x)\rightarrow f_{true}(y)\)
\(E[n]=\nu=R\mu+\beta\)
\(n\) observed histogram (counts in bins of x)
\(\nu\) expectation value of \(n\)
\(\mu\) expectation value of the true histogram (bins of y)
\(R\) response matrix
\(\beta\) background
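A toy sketch of \(\nu=R\mu+\beta\) and the simplest (naive, unregularised) unfolding by inverting the response matrix; the numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# toy response matrix R (bin migration) and true spectrum mu
R = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
mu = np.array([100.0, 300.0, 200.0])
beta = np.array([10.0, 10.0, 10.0])   # background

nu = R @ mu + beta                     # E[n] = nu = R mu + beta
n = rng.poisson(nu)                    # one observed histogram

# naive unfolding: mu_hat = R^{-1} (n - beta).  Unbiased, but it
# amplifies statistical fluctuations, which is why regularised
# unfolding methods are used in practice.
mu_hat = np.linalg.solve(R, n - beta)
print(mu_hat)  # fluctuates around mu = [100, 300, 200]
```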
-
Multivariate Analysis
Neyman-Pearson test statistic is optimal,
but \(f(\vec x|H_{0,1})\) is not always calculable: for an n-dim RV \(\vec x\) with M bins per variable => \(M^n\) bins total
=>idea: make reasonable Ansatz, then use MC
linear \(t(x)=\sum_{i=1}^na_ix_i\)
=>determine \(a_i\) to maximise separation
between \(g(t|H_0)\) and \(g(t|H_1)\)
Fisher separation
define separation: \(J(\vec a)=\frac{(\tau_0-\tau_1)^2}{\Sigma_0^2+\Sigma_1^2}\), with \(\tau_k\) and \(\Sigma_k^2\) the mean and variance of \(t\) under \(H_k\)
optimal for Gaussian \(\vec x\) with common covariance
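A sketch of the Fisher discriminant in the case where it is optimal, using the standard closed-form solution \(\vec a \propto W^{-1}(\vec\mu_0-\vec\mu_1)\) with \(W\) the sum of the class covariances (the two toy Gaussian classes are my choice):

```python
import numpy as np

rng = np.random.default_rng(13)

# two Gaussian classes with a common covariance: the case where
# the Fisher discriminant maximises J(a) over all linear t(x)
n = 50_000
cov = [[1.0, 0.5], [0.5, 1.0]]
x0 = rng.multivariate_normal([0.0, 0.0], cov, n)
x1 = rng.multivariate_normal([1.5, 1.0], cov, n)

# a ∝ W^{-1} (mu0 - mu1), W = sum of within-class covariances
W = np.cov(x0.T) + np.cov(x1.T)
a = np.linalg.solve(W, x0.mean(axis=0) - x1.mean(axis=0))

t0, t1 = x0 @ a, x1 @ a                  # t(x) = sum_i a_i x_i
J = (t0.mean() - t1.mean())**2 / (t0.var() + t1.var())
print(J)  # separation between g(t|H0) and g(t|H1)
```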
-
-