Classification problems metrics
Accuracy
Is the
number of correct predictions
made by the model divided by the
total number of predictions
Gives the percentage of predictions the model got right.
Example: If we run a test set with 100 images and our model
correctly predicted
80 of them, then the accuracy is 80/100 =
0.8
Accuracy = (TP + TN) / (TP + FP + FN + TN) = (TP + TN) / Total Entries
Accuracy is good for
well-balanced target classes
and performs badly on
imbalanced sets
.
Imagine we had 99 images of dogs and 1 image of a cat. If our model always returns the label "dog" for any input, it would still get 99% accuracy!
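The imbalanced dog/cat example above can be sketched in a few lines of Python; the labels (1 = dog, 0 = cat) are a hypothetical encoding, not anything Coggle-specific:

```python
# 99 dogs and 1 cat; the model blindly predicts "dog" (1) for every input.
y_true = [1] * 99 + [0]
y_pred = [1] * 100

# Accuracy = correct predictions / total predictions.
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 0.99 -- high accuracy despite never detecting the cat
```

This is why accuracy alone can be misleading on unbalanced data.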
Recall
Ability of a model to find all the
relevant cases
within a dataset.
Recall = TP / (TP + FN)
Gives the percentage of relevant cases detected.
Of all the actually positive examples, what percentage did I recover?
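A minimal sketch of the recall formula above, assuming binary labels where 1 marks the positive class (the function name and label encoding are illustrative choices, not from the source):

```python
def recall(y_true, y_pred, positive=1):
    """Recall = TP / (TP + FN): share of actual positives the model found."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

# 4 actual positives, the model finds 3 of them -> recall = 3 / 4 = 0.75
print(recall([1, 1, 1, 1, 0], [1, 1, 1, 0, 0]))  # 0.75
```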
Precision
Ability of a classification model to identify only the relevant data points.
Precision = TP / (TP + FP)
Of the examples I labeled as positive, what percentage are actually positive?
The proportion of elements the model flags as relevant that actually are relevant.
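The same sketch for precision, under the same assumed binary encoding (1 = positive); only the denominator changes, from actual positives to predicted positives:

```python
def precision(y_true, y_pred, positive=1):
    """Precision = TP / (TP + FP): share of predicted positives that are correct."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fp)

# The model labels 4 items positive; 3 of them truly are -> precision = 0.75
print(precision([1, 1, 1, 0, 0], [1, 1, 1, 1, 0]))  # 0.75
```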
F1-score
Combines precision and recall into a single number
It is the harmonic mean of precision and recall, taking both metrics into account:
F1 = 2 x (precision x recall) / (precision + recall)
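The equation above translates directly to code; this sketch takes precision and recall as already-computed values (the function name is an illustrative choice):

```python
def f1_score(precision, recall):
    """F1 = 2 * (precision * recall) / (precision + recall)."""
    return 2 * precision * recall / (precision + recall)

# Equal precision and recall -> F1 equals that shared value.
print(f1_score(0.8, 0.8))  # 0.8
# Unequal inputs: the harmonic mean pulls toward the lower of the two,
# so a model cannot hide a poor recall behind a high precision.
print(f1_score(0.9, 0.3))
```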