SMA Week9
steady state simulation
terminating vs steady state
- terminating: there is a natural end point, so you analyse events up to that end - e.g. the bank closing or the end of a year
- steady state: concerned with the long-run equilibrium of the system, so by definition there is no end point
- in addition, initial conditions have a heavy impact on a terminating simulation, whereas for a steady-state simulation their effect washes out in the long run (but the early bias they cause still has to be removed - see the Welch method below)
remove the initialisation bias using the Welch method - because there is a transient initial effect that you need to discard so you can focus only on the long-run behaviour (a code sketch follows the steps and notes below):
- step 1: make many independent simulation runs (replications)
- step 2: average across the runs at each time period, giving one averaged observation per period
- step 3: using those averaged observations, compute and plot a moving average
- step 4: subjectively decide where the plot flattens out and discard everything before that point (the first l observations)
- step 5: use the formula to calculate the sample mean (take the mean of all the observations after the initial l observations are ignored)
- note that the larger the window you use for the moving average, the smoother the curve, at the risk of losing detail --> so compute the moving average for several window sizes (e.g. 3, 50, 500)
- note that if there is some variance or fluctuation after the steady state has been reached, you don't care: that is effectively a new system, which you can ignore for this analysis
- the more replications/simulation runs you make, the better you can detect the initial burn-in period
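roughly what those steps look like in code - a minimal sketch, not the course's implementation; the replication count, the fake transient data, the window sizes and the cut-off l are all illustrative:

```python
# Sketch of the Welch method for detecting the warm-up (burn-in) period.
# Assumes the raw output is an (n_reps x n_periods) array of observations,
# e.g. average queue length per period. All numbers here are made up.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Step 1: many replications (fake data with a decaying start-up transient)
n_reps, n_periods = 20, 1000
transient = 5.0 * np.exp(-np.arange(n_periods) / 100)
raw = transient + rng.normal(loc=10.0, scale=2.0, size=(n_reps, n_periods))

# Step 2: average across replications at each time period
period_means = raw.mean(axis=0)

# Step 3: moving average of the averaged series, for several window sizes
def moving_average(x, w):
    """Centred moving average with window 2*w+1 (window shrinks near the edges)."""
    return np.array([x[max(0, i - w):i + w + 1].mean() for i in range(len(x))])

for w in (3, 50, 500):
    plt.plot(moving_average(period_means, w), label=f"window {w}")
plt.xlabel("period"); plt.ylabel("smoothed mean"); plt.legend(); plt.show()

# Step 4: eyeball the plot and pick the point l where the curve flattens out
l = 300  # chosen subjectively from the plot

# Step 5: sample mean over the observations after the first l are discarded
steady_state_mean = period_means[l:].mean()
print("estimated steady-state mean:", steady_state_mean)
```

running it with the three window sizes makes the trade-off from the note visible: the small window is noisy, the very large one is smooth but hides where the transient actually ends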
tips and tricks:
- in Excel, use the fill handle to copy a formula down a column
- with the Data Analysis ToolPak's moving average tool, we can compute the moving averages and plot them (a pandas equivalent is sketched below)
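the same moving-average step can be done outside Excel; a small sketch using pandas (the column name, values and window size are illustrative):

```python
# Sketch: the moving-average step done in pandas instead of Excel's
# Data Analysis ToolPak. Data and window size are made up for illustration.
import pandas as pd

df = pd.DataFrame({"period_mean": [10.2, 11.5, 9.8, 10.9, 10.4, 10.1, 10.6]})
df["moving_avg_w3"] = df["period_mean"].rolling(window=3, center=True).mean()
print(df)
```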
problems with initialisation:
- differentiate between downward bias and upward bias when you start the simulation empty and idle: e.g. queue waiting times start unrepresentatively low (downward bias), while a measure like server idle time starts unrepresentatively high (upward bias) - see the sketch below
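a small illustration of the downward case, assuming a single-server queue simulated with the Lindley recursion; the arrival/service rates and run lengths are illustrative, not from the notes:

```python
# Sketch: downward bias from starting a single-server queue empty and idle.
# Waiting times follow the Lindley recursion W_{k+1} = max(0, W_k + S_k - A_{k+1}).
import numpy as np

rng = np.random.default_rng(1)
lam, mu = 0.9, 1.0            # illustrative arrival and service rates
n_customers, n_reps = 2000, 200

waits = np.zeros((n_reps, n_customers))
for r in range(n_reps):
    w = 0.0                   # empty-and-idle start, so the first customer waits 0
    for k in range(n_customers):
        waits[r, k] = w
        service = rng.exponential(1 / mu)
        interarrival = rng.exponential(1 / lam)
        w = max(0.0, w + service - interarrival)

mean_by_customer = waits.mean(axis=0)
print("mean wait, first 100 customers:", mean_by_customer[:100].mean())
print("mean wait, last 100 customers: ", mean_by_customer[-100:].mean())
# The early customers' average is clearly lower: the empty start biases
# the initial observations downward relative to the steady state.
```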
relationship between sample mean and true mean
- the average of n different samples, the sample mean, converges to the true mean because of the law of large numbers: E(sample mean) = true mean μ
- Var(sample mean) = Var(X)/n = σ²/n, where σ² is the variance of a single observation
- as a consequence, the larger the number of samples, the smaller the variance of the sample mean (see the derivation below)
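for reference, the standard derivation behind both facts, assuming the n samples are independent and identically distributed with mean μ and variance σ²:

```latex
% X_1, \dots, X_n i.i.d. with E[X_i] = \mu and \operatorname{Var}(X_i) = \sigma^2
\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad
E[\bar{X}_n] = \frac{1}{n}\sum_{i=1}^{n} E[X_i] = \mu, \qquad
\operatorname{Var}(\bar{X}_n) = \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}(X_i)
  = \frac{\sigma^2}{n} \;\xrightarrow[n\to\infty]{}\; 0
```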