Chapter 19. Interpret Model
19.1 Feature Impacts on Target
There are four kinds of relationships that are commonly useful for exploring why a model predicts certain outcomes.
● The overall impact of a feature without consideration of the impact of other features.
● The overall impact of a feature adjusted for the impact of other features.
● The directional impact of a feature.
● The partial impact of a feature.
One of the advantages of supervised machine learning is that all relationships are measured in terms of their relationship with the target.
19.2 The Overall Impact of Features on the Target Without Consideration of Other Features
The overall impact of a feature without consideration for the impact of other features treats each feature as a standalone effect on the target.
The importance score is exceedingly useful because it allows a data scientist to focus attention on the features most likely to yield additional predictive value if the AutoML has misinterpreted them, such as by misinterpreting the variable type. One example of this misinterpretation is treating a categorical feature as though it were numeric.
Unfortunately, these scores are not fully reliable indicators of the value of a feature.
In short, while they provide a useful way to sort features, importance scores should not be relied on for feature selection and model interpretation.
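The idea of a standalone importance score can be illustrated outside DataRobot. The sketch below (using scikit-learn, which is not the book's tool; the synthetic dataset and model choice are assumptions) scores each feature alone against the target, ignoring all other features:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a readmission dataset.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)

# Standalone importance: score each feature on its own,
# without consideration of any other feature.
scores = {}
for j in range(X.shape[1]):
    auc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, [j]], y, scoring="roc_auc", cv=5).mean()
    scores[j] = auc

ranked = sorted(scores, key=scores.get, reverse=True)
print("features ranked by standalone AUC:", ranked)
```

Note that exactly because each feature is scored in isolation, two redundant features can both rank highly, which is one reason such scores should not drive feature selection on their own.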
19.3 The Overall Impact of a Feature Adjusted for the Impact of Other Features
Creating models with fewer features is generally a good idea: it helps avoid overfitting and can also reduce problems caused by changes in the underlying databases and data sources.
Generally, the Tree-Based Variable Importance screen in the Insights area does not warrant a high degree of scrutiny. It is useful because it is generated with a minimum of processing power, but it applies only to tree-based models and contains less accurate information than what may be retrieved by selecting Feature Impact for the very same models.
The Feature Impact pane uses information from any tree-based model to show yet another view of feature importance.
As always, these results are best derived from the most accurate model, so go once more to the model leaderboard and search for the word “tree.”
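The adjusted view of importance is typically computed by permutation: shuffle one feature's values and measure how much the trained model's score drops, so that redundant features no longer all get credit. A minimal sketch of that idea with scikit-learn (the dataset and model here are assumptions, not the book's):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=400, n_features=6,
                           n_informative=3, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X, y)

# Permutation importance: shuffle one column at a time and
# record how much the model's AUC drops as a result.
result = permutation_importance(model, X, y, n_repeats=5,
                                scoring="roc_auc", random_state=1)
for j, drop in enumerate(result.importances_mean):
    print(f"feature {j}: mean AUC drop {drop:.3f}")
```

A feature whose shuffling barely moves the score contributes little beyond what the remaining features already capture.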
19.4 The Directional Impact of Features on Target
The third type of relationship is what this book terms the directional impact of a feature: whether the presence of a value assists the model in predicting readmissions or non-readmissions.
For many datasets, logistic regression does well enough that such new runs are not necessary; however, because that is not always the case, knowing how to rerun a regression model with a greater quantity of data may prove valuable in future projects.
While the DataRobot Variable Effect screen does not provide all the information needed to recreate a model, it does provide what are commonly known as coefficients (labeled here as Effect) for the most important feature characteristics that drive a prediction decision.
19.5 The Partial Impact of Features on Target
The value “Discharged/transferred to another rehab facility including rehab units of a hospital” may offer an enticing indicator that rehabilitation works and that patients who are sent into rehab improve enough that their stats “fool” the model.
DataRobot algorithms are set to carefully avoid overfitting (the creation of models that fit the training data well, but fail when tested against the validation and holdout sets).
Given the number of features created through one-hot encoding, it is not uncommon for all or most cases of a feature to be assigned to one of the two target values (readmitted, in this case).
DataRobot will purposefully work to avoid the possibility of a model growing too confident based on small sets of values.
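One-hot encoding, mentioned above, turns a single categorical feature into one binary column per category value, which is how a rare category can end up with only a handful of cases behind it. A sketch with pandas (the column and values are invented for illustration):

```python
import pandas as pd

# Hypothetical discharge-disposition column.
df = pd.DataFrame({"discharge": ["home", "rehab", "home", "expired"]})

# One binary column is created per category value; a rare value
# such as "expired" yields a column that is almost entirely zeros,
# exactly the situation that invites overconfident models.
encoded = pd.get_dummies(df, columns=["discharge"])
print(encoded.columns.tolist())
```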
DataRobot’s partial dependence plot shows the marginal effect of a value when all other features are constant.
In other words, it pretends that the value of this feature is the only known information for each patient and calculates its effect on the target as such.
When interpreting a partial dependence plot, a strong result is one in which the locations of the yellow dot values change significantly along the rightmost Y axis.
19.6 The Power of Language
There are four diagnosis codes containing the term “valve”:
● Diseases of tricuspid valve
● Mitral valve disorders
● Congenital pulmonary valve anomaly, unspecified
● Mitral valve stenosis and aortic valve stenosis
19.7 Hotspots
The Hotspot screen shows the most relevant (up to four) combinations of features and their effect on the target.
Think of this diagram as a set of Venn diagrams where the largest and most overlapping hotspots are organized in the middle.
The mean relative target is the result of dividing this hotspot group's target rate by the average readmission rate, and it indicates how predictive the hotspot is.
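The arithmetic behind the mean relative target is simple division; here is a worked example with invented counts (not figures from the book's dataset):

```python
# Hypothetical counts, for illustration only.
hotspot_readmits, hotspot_cases = 30, 60         # 50% readmitted in hotspot
overall_readmits, overall_cases = 1_000, 10_000  # 10% readmitted overall

hotspot_rate = hotspot_readmits / hotspot_cases
overall_rate = overall_readmits / overall_cases

# Mean relative target: the hotspot's rate relative to the average.
mean_relative_target = hotspot_rate / overall_rate
print(mean_relative_target)  # 5.0: this hotspot readmits at 5x the average rate
```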
While the Hotspot panel is quite visually impressive, it is not recommended to show this particular screen during presentations due to its exceedingly high level of detail and complexity.
19.8 Reason Codes
The reason codes are a powerful feature that can supplement business decisions.
Be aware that computing reason codes is slower than computing predictions, as reason codes require additional evaluations of why a prediction was set at the given probability for each case.
This is, however, not likely to be a problem since reason codes are primarily of use in settings where human beings are involved in the decision process (unlike an earlier example of automated stocks trading).