Shapland: Using the ODP Bootstrap Model
ODP Bootstrap model
Movement towards the estimation of an unpaid claim distribution is encouraged by the following
ASOP 43 defines “actuarial central estimate” in such a way that it could include either deterministic point estimates or a first moment estimate from a distribution
All of the major rating agencies have built dynamic risk models to help their insurance rating process and welcome the input of company actuaries regarding unpaid claim distributions
Companies that use dynamic risk models to help their internal risk management processes need unpaid claim distributions
The Solvency II regime in Europe is moving many insurers towards unpaid claims distributions
International financial reporting standards suggest that the future of insurance accounting may rely on unpaid claim distributions for booked reserves
The goal of the ODP bootstrap model is to generate a distribution of possible outcomes, rather than a point estimate, providing more information about the potential results. Currently, the vast majority of reserving analyses focus on deterministic point estimates.
Sampling with replacement assumes that the residuals are i.i.d., but it does not require the residuals to be normally distributed, which is often considered an advantage (the residuals are standardized and then sampled at random with replacement to create a sampled triangle)
Standardizing residuals “makes sure” that they all have the same variance
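A minimal sketch of the resampling step in Python, assuming `fitted` holds the fitted incremental losses and `resid` the standardized residuals of the historical triangle (NaN where no observation exists); variable names are illustrative, not from the monograph, and the ODP case z = 1 is assumed.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_triangle(fitted, resid):
    """Build one sampled incremental triangle by resampling residuals."""
    pool = resid[~np.isnan(resid)]               # residuals eligible for sampling
    draws = rng.choice(pool, size=fitted.shape)  # sample with replacement
    # Invert the (unscaled) Pearson residual r = (q - m) / sqrt(m) to get
    # sampled incrementals q* = m + r* * sqrt(m).
    return fitted + draws * np.sqrt(fitted)
```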
Two ways of modeling an unpaid loss distribution using incurred loss data
Modelling incurred data directly and converting the ultimate values to a payment pattern
Modelling claim payments and case reserves separately
Key definitions
C: cumulative losses
q: incremental losses
w: accident year
d: development year
alpha: accident-year level parameter
beta: parameter that adjusts for development trend
omega: calendar year trend parameter
N: number of observations
p: number of parameters
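A sketch of how these symbols fit together, assuming the log-link GLM form underlying the ODP bootstrap; the exact indexing of the trend parameters may differ from the monograph, and phi denotes the scale (dispersion) parameter:

```latex
\ln\big(m_{w,d}\big) = \alpha_w + \sum_{i=2}^{d}\beta_i \;(+\ \text{calendar-year trend terms }\omega),
\qquad
\operatorname{Var}\big(q(w,d)\big) = \phi\, m_{w,d}^{\,z}
```

With a log link and z = 1 the error distribution is over-dispersed Poisson; the N observations are used to estimate the p parameters.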
GLM bootstrap model vs ODP bootstrap
Advantages of the GLM Bootstrap
Can account for CY effects
Can use fewer parameters to avoid over-parameterization
Advantages of the ODP Bootstrap
The use of LDFs makes the model more easily explainable
Still gets a solution even when dealing with negative incremental losses
Faster to run; does not require re-solving the GLM after every iteration
The ODP bootstrap is a particular case of the GLM bootstrap; it imposes the following restrictions on the GLM bootstrap:
There is a separate parameter for each AY and development period, whereas the GLM bootstrap can group parameters together
No CY parameters, whereas the GLM bootstrap can allow for them
The link function is the log link and the error distribution is ODP, whereas the GLM bootstrap can use other link functions and error distributions
Mechanism
The GLM bootstrap fits a GLM to the incremental loss triangle to estimate expected incremental losses. For each iteration, the GLM is refit to the sample loss triangle and the refit parameters are used to estimate the expected incremental losses for the projected (future) cells
The ODP bootstrap uses volume-weighted LDFs to calculate expected incremental losses from the original triangle. The CL method is then applied to each sample loss triangle to calculate the expected incremental losses for the projected (future) cells
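A minimal sketch of the ODP bootstrap fitting step in Python, assuming `cum` is the cumulative loss triangle as a 2-D float array with NaN below the diagonal; names are illustrative, not from the monograph.

```python
import numpy as np

def fit_odp_triangle(cum):
    """Volume-weighted LDFs and fitted incrementals from a cumulative triangle."""
    n = cum.shape[0]
    ldf = np.ones(n - 1)
    for d in range(n - 1):
        rows = ~np.isnan(cum[:, d + 1])                       # AYs observed at both ages
        ldf[d] = cum[rows, d + 1].sum() / cum[rows, d].sum()  # volume-weighted LDF

    # "Backcast" the fitted cumulative triangle: start each AY at its latest
    # diagonal value and divide back down the LDFs, then difference to get
    # the fitted incremental losses m(w,d).
    fit_cum = np.full_like(cum, np.nan)
    for w in range(n):
        last = n - 1 - w                                      # latest observed age for AY w
        fit_cum[w, last] = cum[w, last]
        for d in range(last - 1, -1, -1):
            fit_cum[w, d] = fit_cum[w, d + 1] / ldf[d]
    fit_inc = np.diff(np.concatenate([np.zeros((n, 1)), fit_cum], axis=1), axis=1)
    return ldf, fit_inc
```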
Practical issues
Negative incremental values
Individual negative values for incremental loss data are only an issue if the total of all incremental values in a development column is negative, as the GLM will not be able to find a solution in that case (due to the use of a log-link function)
Negative incremental values can also cause extreme outcomes
Limit incremental values to a minimum value, such as zero
Remove these iterations from the simulation and replace them with new iterations. But care must be taken under this approach to remove only unreasonably extreme iterations, so that the resulting distribution does not understate the probability of extreme outcomes
Recalibrate the model: identify the source/cause of the negative losses and adjust the model accordingly
Non-zero sum of residuals
The standardized residuals should in theory be iid with a mean of zero. Since the residuals are random observations of the true residual distribution, the average is usually non-zero. If the observed average is significantly different from zero, the fit of the model should be questioned.
Heteroscedasticity: when the standardized residuals have different variances
Stratified sampling
Is accomplished by grouping those development periods with homogeneous variances and then sampling only from the residuals in each group. Although straightforward, groups with few residuals will produce outcomes with low variability, which defeats the purpose of random sampling with replacement
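A minimal sketch of stratified sampling in Python, assuming `resid` is the standardized residual triangle and `groups[d]` gives the variance group of development column d; names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def stratified_sample(resid, groups):
    """Resample each cell only from residuals in the same variance group."""
    out = np.full_like(resid, np.nan)
    for g in set(groups):
        cols = [d for d, grp in enumerate(groups) if grp == g]
        pool = resid[:, cols]
        pool = pool[~np.isnan(pool)]                  # residuals in this group
        for d in cols:
            rows = ~np.isnan(resid[:, d])             # cells to fill in column d
            out[rows, d] = rng.choice(pool, size=int(rows.sum()))
    return out
```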
Calculate hetero-adjustment parameters (or variance parameters)
1. Group residuals as in stratified sampling (suppose there are j groups) and calculate the variance parameter for each group
2. Adjust the standardized residuals by multiplying them by the variance parameter
3. Generate incremental losses
Modify the formula for the scale parameter to obtain a different scale parameter for each hetero group.
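A minimal sketch of steps 1 and 2 in Python, assuming the common convention of scaling every group to the overall standard deviation of the residuals; `resid` and `groups` are as above, and the exact factor definition should be checked against the monograph.

```python
import numpy as np

def hetero_factors(resid, groups):
    """Variance (hetero-adjustment) parameter for each group of development columns."""
    all_res = resid[~np.isnan(resid)]
    factors = {}
    for g in set(groups):
        cols = [d for d, grp in enumerate(groups) if grp == g]
        grp_res = resid[:, cols]
        grp_res = grp_res[~np.isnan(grp_res)]
        # Multiplying group g residuals by this factor gives them the overall
        # standard deviation, so all residuals can be sampled from one pool.
        factors[g] = all_res.std(ddof=1) / grp_res.std(ddof=1)
    return factors

# Step 3 (sketch): when a sampled residual lands in a cell belonging to group g,
# divide it by factors[g] before converting it back to an incremental loss, so
# the cell keeps its own group's variance.
```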
Exposure adjustment
If the earned exposures exist for this data, then a useful option for the ODP bootstrap model is to divide all of the claim data by the exposures for each accident year—i.e., effectively using pure premium development instead of total loss development
Heteroecthesious data: when the underlying exposures are dissimilar
Partial first development period
The first development period doesn't have the same underlying exposure as the other periods
Partial last calendar period data
For a deterministic analysis, it is common to exclude the last diagonal when calculating average development factors to project the future value. Instead of ignoring the last diagonal during the parameterization of the model, an alternative is to adjust or annualize the exposures in the last diagonal to make them consistent with the rest of the triangle. The fitted triangle can be calculated from this annualized triangle to obtain residuals
Tail factor
A rough rule of thumb for the tail factor standard deviation is 50% or less of (the tail factor minus one), assuming the tail factor is greater than one. For example, a tail factor of 1.05 would suggest a standard deviation of at most about 0.025
Other issues
Using an n-year weighted average
Missing values
Outliers
Future research
Expand testing of the ODP bootstrap model with realistic data using the CAS loss simulation model
Research on how the adjustments to the ODP bootstrap and GLM bootstrap suggested in this monograph perform relative to realistic data—i.e., is there a significant improvement in the predictive power of the model given the different model configurations and adjustments
Expand or change the ODP bootstrap model in other ways, for example use of the Munich chain ladder or Berquist-Sherman method with an incurred/paid set of triangles, or the use of claim counts and average severities. Other examples could include the use of different residuals such as deviance or Anscombe residuals
Research the use of a Bayesian or other approach to selecting weights for different models by accident year to improve the process of combining multiple models
Research other risk analysis measures and how the ODP bootstrap model can be used for ERM (enterprise risk management)
Research how the ODP bootstrap model can be used for Solvency II requirements in Europe and the International Accounting Standards
Research into the most difficult parameter to estimate: the correlation matrix
Diagnostics
Key diagnostics
To test various assumptions in the model
To gauge the quality of the model fit to the data
To help guide the adjustment of model parameters
Residual graphs
Can be done against
Development period, accident period, calendar period, or fitted incremental losses
No visible trend should be apparent. Otherwise, it is likely that a trend has not been accounted for (depending on the x-axis of the graph)
Residuals should appear to vary randomly around zero with roughly constant spread. Otherwise, the model is exhibiting heteroscedasticity. After adjusting for heteroscedasticity, a diagnostic test other than a residual graph will need to be used to evaluate the quality of the fit
Outliers
Can be represented graphically in a box-whisker plot: a graphic whose “box” has sides at the 25th and 75th percentiles and is cut somewhere in between at the median (the length of the box is referred to as the inter-quartile range). Whiskers extend from either side of the box to the largest and smallest values within three times the inter-quartile range (see the sketch below)
The possibility always remains that apparent outliers may actually represent realistic extreme values, which, of course, are critically important to include as part of any sound analysis
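A minimal sketch of the box-whisker diagnostic in Python, using stand-in residuals; `whis=3` reproduces the three-times-IQR whiskers described above, and any points beyond the whiskers are drawn individually as potential outliers.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in data: in practice use the standardized residuals from the fitted triangle.
rng = np.random.default_rng(seed=1)
resid_pool = rng.standard_normal(55)

plt.boxplot(resid_pool, whis=3)   # whiskers at the extremes within 3 x IQR
plt.title("Box-whisker plot of standardized residuals")
plt.ylabel("Standardized residual")
plt.show()
```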
Parameter adjustment
If there is a parameter for every development and accident period and the “trend” is a relatively straight line in the residual graph, this is usually a strong indication that the model may be over-parameterized
Through trial and error, one can allow fewer parameters to fluctuate and let the other parameters be functions of the fluctuating parameters
Model results
The coefficient of variation of the total unpaid amount should be less than the coefficient of variation of any individual AY
Mean of unpaid loss amounts should increase with respect to AY
Standard error of unpaid amounts should increase with respect to AY
The standard error of total unpaid amounts should be greater than the standard error of any individual AY
Coefficients of variation of unpaid losses should decrease with respect to accident year. If they start to rise in more recent years, these issues may explain why:
1.With an increasing number of parameters used in the model, the parameter uncertainty tends to increase when moving from the oldest years to the more recent years.
2.The model may be overestimating the uncertainty in recent accident years if the increase is significant. In that case, another model algorithm (BF or CC) may need to be used instead of the CL method
Normality test
Although the ODP bootstrap model does not depend on the residuals being normally distributed, comparing residuals against a normal distribution remains a useful test, enabling comparison of parameter sets and gauging skewness of the residuals
AIC and BIC statistics can be used to test for fit while penalizing for added parameters
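As a reminder of the standard forms (the monograph may use variants based on the model deviance), with L the maximized likelihood and N and p as defined above:

```latex
\mathrm{AIC} = 2p - 2\ln(L), \qquad \mathrm{BIC} = p\,\ln(N) - 2\ln(L)
```

Both penalize additional parameters; BIC penalizes them more heavily whenever ln(N) > 2.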
Using multiple models
Two primary methods exist for combining the results for multiple models
Run models with the same random variables. At the end, the incremental values for each model, for each iteration by accident year, can be weighted together
Run models with independent random variables. At the end, the weights are used to randomly select a model for each iteration by accident year so that the result is a weighted “mixture” of models
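A minimal sketch of the two approaches in Python, assuming `results[m, i, w]` holds the simulated unpaid claims for model m, iteration i and accident year w, and `weights[m, w]` are the selected model weights by accident year (summing to 1 over models); names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def weight_together(results, weights):
    """Models run with the same random variables: weight the outcomes together."""
    # results: (n_models, n_iters, n_ays); weights: (n_models, n_ays)
    return np.einsum("miw,mw->iw", results, weights)

def mixture_of_models(results, weights):
    """Models run independently: weights randomly select a model per iteration and AY."""
    n_models, n_iters, n_ays = results.shape
    out = np.empty((n_iters, n_ays))
    for w in range(n_ays):
        picks = rng.choice(n_models, size=n_iters, p=weights[:, w])
        out[:, w] = results[picks, np.arange(n_iters), w]
    return out
```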
Correlation
Location mapping (synchronized bootstrapping): applied at the step where residuals are sampled at random
Each iteration samples residuals for the first segment and notes the location in the original residual triangle of each sampled residual; the residuals for the other segments are then taken from the same locations in their own residual triangles, preserving the correlation observed in the historical data
Drawbacks
1.It requires all of the business segments to use data triangles that are precisely the same size, with no missing values or outliers, when comparing each location of the residuals
2.The correlation of the original residuals is used in the model, and no other correlation assumptions can be used for stress testing the aggregate results
Re-sorting
To induce correlation among business segments in a bootstrap model, re-sort the residuals for each business segment until the rank correlation between segments matches the desired correlation. This can be accomplished with algorithms such as Iman-Conover or copulas (see the sketch after the list below)
Advantages:
The triangles for each segment may have different shapes and sizes
Different correlation assumptions may be employed
Different correlation algorithms may also have other beneficial impacts on the aggregate distribution
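A minimal sketch of re-sorting in Python, using a normal copula to generate the target ranks rather than the full Iman-Conover algorithm; `segments` is a list of equal-length simulated unpaid-claim vectors (one per business segment) and `corr` is the desired correlation matrix. Names are illustrative, and the induced rank correlation is only approximately equal to `corr`.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def resort(segments, corr):
    """Re-order each segment's simulated outcomes so their ranks follow `corr`."""
    n = len(segments[0])
    # Correlated normal draws supply the target rank ordering for each segment.
    z = rng.multivariate_normal(np.zeros(len(segments)), corr, size=n)
    out = []
    for j, seg in enumerate(segments):
        ranks = z[:, j].argsort().argsort()   # rank (0..n-1) of each iteration's draw
        out.append(np.sort(seg)[ranks])       # assign sorted outcomes by those ranks
    return out
```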