Chapter 18. Comparing Model Pairs
18.1 Model Comparison
To save processing time, the model selected was the best non-blender model: blender models are generally beyond an easy explanation and take longer for Model X-Ray and Feature Impact to calculate.
The subsequent chart orders each algorithm’s predictions by probability from high to low, splits them into 15 bins, and calculates the average number of readmits (values of one) in each bin. Those bins are ordered from right to left on the X-axis, and the Y-axis displays each bin’s average readmit rate.
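A minimal sketch of how such a binned comparison could be computed, assuming pandas and illustrative inputs y_true (actual readmit values of zero or one) and y_prob (one model’s predicted probabilities); all names here are hypothetical, not from the book:

```python
import pandas as pd

def lift_bins(y_true, y_prob, n_bins=15):
    """Sort predictions from high to low probability, split them into
    roughly equal-sized bins, and return the average actual readmit
    rate (share of ones) in each bin."""
    df = pd.DataFrame({"actual": y_true, "prob": y_prob})
    df = df.sort_values("prob", ascending=False).reset_index(drop=True)
    df["bin"] = pd.qcut(df.index, n_bins, labels=False)  # bin 0 = highest probabilities
    return df.groupby("bin")["actual"].mean()
```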
This “curve” is drawn beginning with a high probability distribution threshold at which almost no cases are predicted as positives, so both the true positive rate and the false positive rate are zero. With a perfect model, at any cutoff point one finds only true positive cases, so the “curve” travels immediately along the left bound of the chart. The “curve” remains there until the model begins predicting negative cases as positives, which happens in an ROC chart as the probability distribution threshold moves to the left.
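A rough sketch of the mechanics behind the curve (not any particular tool’s implementation; names are illustrative): sweep the threshold from high to low and record the false and true positive rates at each step.

```python
import numpy as np

def roc_points(y_true, y_prob):
    """As the threshold falls from high to low, more cases are called
    positive; record (FPR, TPR) at each step to trace the curve."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    n_pos, n_neg = (y_true == 1).sum(), (y_true == 0).sum()
    points = [(0.0, 0.0)]                      # threshold above all probabilities
    for t in np.unique(y_prob)[::-1]:          # highest threshold first
        predicted_pos = y_prob >= t
        tpr = (predicted_pos & (y_true == 1)).sum() / n_pos
        fpr = (predicted_pos & (y_true == 0)).sum() / n_neg
        points.append((fpr, tpr))
    return points  # a perfect model reaches TPR 1.0 while FPR is still 0
```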
The dual lift chart is calculated by subtracting the predictions made by the two models. Where the two models disagree at the greatest magnitude on either side of the spectrum, the ENET blender is more correct.
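A hedged sketch of that calculation, with hypothetical names (prob_a for the ENET blender, prob_b for the other model): sort cases by how far the predictions diverge, bin them, and compare each model’s average prediction with the actual rate in each bin; the extreme bins show which model is more correct where they disagree most.

```python
import pandas as pd

def dual_lift(y_true, prob_a, prob_b, n_bins=15):
    """Sort cases by how much the two models disagree (prob_a - prob_b),
    bin them, and return each model's mean prediction alongside the
    actual readmit rate per bin."""
    df = pd.DataFrame({"actual": y_true, "a": prob_a, "b": prob_b})
    df["diff"] = df["a"] - df["b"]
    df = df.sort_values("diff").reset_index(drop=True)
    df["bin"] = pd.qcut(df.index, n_bins, labels=False)
    return df.groupby("bin")[["a", "b", "actual"]].mean()
```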
18.2 Prioritizing Modeling Criteria and Selecting a Model
When deciding which model to select, there are five criteria to consider:
Predictive accuracy
Prediction speed.
When a patient is ready to be discharged and their data is uploaded to the model, a probability of readmission can be calculated, and that probability is then translated into a business decision.
Speed to build model.
Speed to build reflects how long it takes to train a model. This will depend on how much data the model is trained with and on the complexity of the algorithm.
Familiarity with model.
Assumes that the data scientist (you) is an expert on one of the algorithms and is able to understand the exact meaning of its results. Because most models, and the algorithms that created them, are too complex to make much sense of, this criterion is most relevant when someone is an expert on regression (logistic or linear).
Insights.
Based on the assumption that different algorithms make different statistics available. Regression, for example, is commonly used to show which features drive the prediction toward the positive value of the target and which drive it toward the negative value.
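As an illustration of that point, a minimal sketch using scikit-learn on synthetic data (assumed here, not from the book): the sign of each logistic regression coefficient shows whether a feature drives the prediction toward the positive or the negative value of the target.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data stands in for the readmission features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

# Positive coefficients push predictions toward the positive target value,
# negative coefficients toward the negative value.
coefs = pd.Series(model.coef_[0], index=[f"feature_{i}" for i in range(5)])
print(coefs.sort_values())
```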
At this point, one of the models produced should stand out as a winner.
Within each model, there should also be a clear sense of the measures generated at different probability thresholds and their impact on the confusion matrix.
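A small sketch of that idea, with made-up labels and probabilities: moving the probability threshold reshapes the confusion matrix, trading false negatives for false positives.

```python
import numpy as np

def confusion_at(y_true, y_prob, threshold):
    """Return (TP, FP, FN, TN) at one probability threshold."""
    y_true = np.asarray(y_true)
    pred = np.asarray(y_prob) >= threshold
    tp = int((pred & (y_true == 1)).sum())
    fp = int((pred & (y_true == 0)).sum())
    fn = int((~pred & (y_true == 1)).sum())
    tn = int((~pred & (y_true == 0)).sum())
    return tp, fp, fn, tn

# Lowering the threshold trades false negatives for false positives.
for t in (0.7, 0.5, 0.3):
    print(t, confusion_at([1, 0, 1, 0, 1], [0.9, 0.6, 0.55, 0.2, 0.4], t))
```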
After deciding which measure or set of measures best fits the modeling case at hand, select the model that provides the best predictive accuracy at an affordable price and within suitable run-time requirements. The job of a data scientist is often to find out which criteria affect the use of the model and to pick a model based on the optimal combination of those criteria.
Some models must be continually retrained based on streaming data in order to be sufficiently accurate.