Meyers: Stochastic Loss Reserving Using Bayesian MCMC Models
Validating the Mack Model
To test general applicability, 200 completed loss rectangles were analyzed (rectangles for which the ultimate losses were known)
For each loss rectangle, the half below the diagonal was replaced with Mack (chain-ladder) estimates, as if those losses were unknown. Assuming that the losses follow a lognormal distribution, the sum of each rectangle's last column (the actual ultimate) was plugged into the CDF of a lognormal whose parameters were matched to the mean and variance of the rectangle's total Mack ultimate loss estimate
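As a concrete illustration of the chain-ladder completion step, here is a minimal sketch that fills the lower half of a cumulative loss rectangle with volume-weighted development factors. The 3x3 triangle and its values are made up for illustration, not taken from Meyers' data:

```python
# Sketch: completing the lower half of a cumulative loss rectangle with
# volume-weighted chain-ladder (Mack) development factors.
# The triangle below is hypothetical.
import numpy as np

tri = np.array([
    [100.0, 150.0, 165.0],    # accident year 1: fully developed
    [110.0, 160.0, np.nan],   # accident year 2
    [120.0, np.nan, np.nan],  # accident year 3
])
n = tri.shape[0]

for d in range(1, n):
    known = ~np.isnan(tri[:, d])
    # volume-weighted age-to-age factor from rows observed at both ages
    f = tri[known, d].sum() / tri[known, d - 1].sum()
    # project the unknown cells forward
    tri[~known, d] = tri[~known, d - 1] * f

print(tri)  # last column now holds the Mack ultimate estimates
```

The last column of the completed rectangle is what gets compared against the known ultimates in the validation procedure above.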
The model is said to generally apply if the computed percentiles appear to be uniformly distributed. The hypothesis of uniformity can be rejected via the Kolmogorov-Smirnov (K-S) test
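The percentile plug-in and uniformity check can be sketched as follows. The Mack mean and standard deviation, and the 200 simulated outcomes, are illustrative; fitting the lognormal by moment matching is an assumption about the procedure:

```python
# Sketch: plug actual ultimates into a lognormal CDF fit to the Mack
# mean/variance, then test the resulting percentiles for uniformity.
# All numbers are illustrative, not Meyers' Schedule P data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def mack_percentile(actual_total, mack_mean, mack_sd):
    """CDF value of the actual ultimate under a lognormal whose
    underlying normal parameters are moment-matched to the Mack
    mean and standard deviation."""
    sigma2 = np.log(1.0 + (mack_sd / mack_mean) ** 2)
    mu = np.log(mack_mean) - 0.5 * sigma2
    return stats.lognorm.cdf(actual_total, s=np.sqrt(sigma2), scale=np.exp(mu))

# Simulate 200 "rectangles" where the lognormal assumption holds exactly:
mack_mean, mack_sd = 1e6, 2e5
sigma2 = np.log(1.0 + (mack_sd / mack_mean) ** 2)
mu = np.log(mack_mean) - 0.5 * sigma2
actuals = rng.lognormal(mu, np.sqrt(sigma2), size=200)
pcts = mack_percentile(actuals, mack_mean, mack_sd)

# If the model is correct, the percentiles should be uniform on [0, 1]:
ks_stat, p_value = stats.kstest(pcts, "uniform")
print(f"K-S statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```

Because the simulated data here come from the assumed lognormal, the K-S test should not reject; on real triangles a rejection is evidence against the model.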
The Anderson-Darling (A-D) test is similar to the K-S test but is more sensitive to the fit in the extreme percentiles; Meyers did not use it because every model failed it
P-P plot
We plot the sorted predicted percentiles against the expected (uniform) percentiles
See graphs
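A sketch of the P-P plot coordinates, using made-up percentile values: sorted predicted percentiles on one axis against expected uniform percentiles e_i = i/(n+1) on the other. Points near the 45-degree line indicate the percentiles look uniform:

```python
# Sketch: P-P plot coordinates for a handful of hypothetical percentiles.
import numpy as np

pcts = np.array([0.12, 0.85, 0.40, 0.61, 0.05, 0.93, 0.33, 0.77, 0.52, 0.24])
n = len(pcts)

expected = np.arange(1, n + 1) / (n + 1)  # x-axis: uniform plotting positions
observed = np.sort(pcts)                  # y-axis: sorted predicted percentiles

for e, o in zip(expected, observed):
    print(f"expected {e:.2f}  observed {o:.2f}")
```

Passing these two arrays to any plotting library, together with the diagonal y = x, reproduces the P-P plots in the graphs.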
Validation of the Bootstrap ODP Model
This model assumes that the incremental losses follow an overdispersed Poisson (ODP) distribution
Since the ODP distribution is defined only for nonnegative amounts, using this model usually requires incremental paid losses (as opposed to reported losses, whose increments can be negative)
For paid triangles, the Mack model and bootstrap ODP models tend to produce expected loss estimates that are too high
Key definitions
See the formula sheet
d: development age of the loss
w: accident year
n: age to ultimate
B: proportion of losses paid to date
gamma: payment-speed increase
P: estimated percentiles
f: expected percentiles
T: continuous trend factor
p: between-accident-year correlation
Bayesian models for incurred loss data
Two ways to improve the recognition of the volatility of the predictive distribution
1. The Mack model multiplies the age-to-age factors by the last observed loss; one can think of the last observed loss for each accident year as a fixed level parameter. A model that treats the level of the accident year as random will predict more risk
2. The Mack model assumes that the loss amounts for different accident years are independent. A model that allows for correlation between accident years could increase the standard deviation of the sum of the predicted ultimate losses
Two proposed models
Leveled Chain-Ladder Model
(LCL)
This model addresses shortcoming 1 of the Mack model noted above
Correlated Chain-Ladder Model
(CCL)
The same as the LCL model, but tweaked to account for correlation between accident years
Generally speaking, the standard deviations of the predicted outcomes of the LCL and CCL models are higher than those for the Mack Model
Bayesian Models for Paid Loss Data
The inclusion of a payment year trend (i.e. along a diagonal, where w+d is constant) in a model has two important consequences
The model should be based on incremental paid loss rather than cumulative paid loss, because cumulative losses include settled claims that no longer change with time (and would therefore obscure the payment-year trend)
A distribution that usually works for this purpose is the skew normal distribution
With location (mu), scale (omega), and shape (delta) parameters respectively, its distribution can be expressed as a sum of a truncated normal and a standard normal distribution
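That representation can be sketched directly: with Z a half-normal (a standard normal truncated at zero), W an independent standard normal, and delta in (-1, 1), the combination mu + omega*(delta*Z + sqrt(1 - delta^2)*W) is skew normal. The parameter values below are illustrative:

```python
# Sketch: generating skew-normal draws via the truncated-normal-plus-
# normal representation. delta controls the skewness; mu and omega are
# illustrative location/scale values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, omega, delta = 0.0, 1.0, 0.8

z = np.abs(rng.standard_normal(100_000))  # half-normal (truncated at 0)
w = rng.standard_normal(100_000)          # independent standard normal
x = mu + omega * (delta * z + np.sqrt(1.0 - delta**2) * w)

# scipy parameterizes the same family with shape alpha = delta / sqrt(1 - delta^2)
alpha = delta / np.sqrt(1.0 - delta**2)
print(x.mean(), stats.skewnorm.mean(alpha, loc=mu, scale=omega))
```

Positive delta yields the right skew needed for incremental paid losses, while the normal component still allows occasional negative values.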
Incremental paid loss amounts tend to be skewed to the right and are occasionally negative. We need a loss distribution that allows for these features
Two proposed methods
The Correlated Incremental Trend Model
(CIT)
Comparison between CIT and CCL
In the CCL model, which is applied to cumulative losses, the standard deviations decrease at later ages because most claims are already settled and there is less remaining volatility. In the CIT model, because larger claims tend to be settled later in payment, the standard deviation increases with loss age
In the CCL model, the autocorrelation feature was applied to the logarithm of the cumulative losses. Since incremental losses can be negative, it was necessary to apply the autocorrelation feature after leaving "log" space in the CIT model
CIT uses incremental losses to be able to use the trend, CCL uses cumulative losses
By setting p = 0, we eliminate between-accident-year correlation and obtain the Leveled Incremental Trend (LIT) model
The CIT and LIT models tend to overstate the estimates of the expected (paid) loss
The Changing Settlement Rate Model
(CSR)
Developed to address the possibility of a speedup in claim settlement. It can also help correct heavy-tailed models
Other topics
Data selection process
If an insurer makes significant changes in its volume of business over a 10-year period covered by Schedule P, a change in business operation could be inferred
If an insurer makes significant changes in its net to direct premium ratio over the 10-year period, a change in its reinsurance strategy could be inferred
Bayesian Markov Chain Monte Carlo
(MCMC)
models
There is a certain class of Markov chains, generally called "ergodic", that approach a limiting distribution for the state vectors. That is to say, as the number of steps T increases, the distribution of the chain's state approaches a unique limiting distribution
The Markov chains used in Bayesian MCMC analyses are members of this class
Let x be a vector of observations and let y be a vector of parameters in a model. In Bayesian MCMC analyses, the Markov chain is defined in terms of the prior distribution and the conditional distribution. The limiting distribution is the posterior distribution. That is to say, if we let the chain run long enough, the chain will randomly visit all states with a frequency that is proportional to their posterior probabilities
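This idea can be sketched with a tiny random-walk Metropolis sampler, a standard MCMC algorithm whose limiting distribution is the posterior. The example is an illustrative coin-flip problem, not one of Meyers' reserving models: with a uniform prior on theta and 7 heads in 10 flips, the posterior is Beta(8, 4) with mean 8/12:

```python
# Sketch: random-walk Metropolis chain converging to a Beta(8, 4)
# posterior (uniform prior, 7 heads in 10 flips). Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
heads, flips = 7, 10

def log_post(theta):
    """Log posterior up to a constant: binomial likelihood, uniform prior."""
    if not 0.0 < theta < 1.0:
        return -np.inf
    return heads * np.log(theta) + (flips - heads) * np.log(1.0 - theta)

theta, chain = 0.5, []
for _ in range(50_000):
    prop = theta + rng.normal(0.0, 0.1)   # random-walk proposal
    # accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

chain = np.array(chain[5_000:])           # drop burn-in
print(chain.mean())                       # near the posterior mean 8/12
```

Letting the chain run longer makes the empirical distribution of visited states match the posterior ever more closely, which is exactly the property the Bayesian MCMC reserving models rely on.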
Antoine's explanation is better for the validation of the Mack model