Classification problems metrics
Accuracy: the number of correct predictions made by the model divided by the total number of predictions.
Gives the percentage of predictions the model got right.
Example: If we run a test set with 100 images and our model correctly classifies 80 of them, then we have an accuracy of 80/100 = 80%.
Accuracy = (TP + TN) / (TP + FP + FN + TN) = (TP + TN) / Total Entries
Accuracy is good for well-balanced target classes and performs badly for imbalanced ones.
Imagine we had 99 images of dogs and 1 image of a cat. If our model always returned the label dog for any input, it would still get 99% accuracy!
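The dogs-and-cats pitfall can be sketched in a few lines; the function name and labels here are illustrative, not from the source.

```python
# Sketch: accuracy = correct predictions / total predictions,
# shown collapsing on an imbalanced dataset (99 dogs, 1 cat).
def accuracy(y_true, y_pred):
    # count positions where the prediction matches the true label
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = ["dog"] * 99 + ["cat"]   # 99 images of dogs, 1 of a cat
y_pred = ["dog"] * 100            # degenerate model: always says "dog"
print(accuracy(y_true, y_pred))   # 0.99 even though it never finds the cat
```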
Recall: the ability of a model to find all the relevant cases within a dataset.
Recall = TP / (TP + FN)
Gives the percentage of relevant cases detected.
Of all the actual positive examples, what percentage did I retrieve?
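A minimal sketch of Recall = TP / (TP + FN); the data and the `positive` label are assumptions for illustration.

```python
# Recall: of all actual positives, what fraction did the model retrieve?
def recall(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

# 4 actual positives, the model retrieves 2 of them -> recall = 0.5
print(recall([1, 1, 1, 1, 0, 0], [1, 1, 0, 0, 1, 0]))  # 0.5
```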
Precision: the ability of a classification model to identify only the relevant data points.
Precision = TP / (TP + FP)
Of the examples I labeled as positive, what percentage are actually positive?
The proportion of elements the model says are relevant that actually are relevant.
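A minimal sketch of Precision = TP / (TP + FP), using the same illustrative labels as above.

```python
# Precision: of the items the model labeled positive, what fraction truly are?
def precision(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fp)

# model labels 3 items positive, 2 of them really are -> precision = 2/3
print(round(precision([1, 1, 1, 1, 0, 0], [1, 1, 0, 0, 1, 0]), 4))  # 0.6667
```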
F1 score: combines both precision and recall.
It is the harmonic mean of precision and recall, taking both metrics into account in the following equation.
F1 = 2 x (precision x recall) / (precision + recall)
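The F1 formula above as a one-line function; the example values are assumed, chosen to show that the harmonic mean is pulled toward the lower of the two metrics.

```python
# F1: harmonic mean of precision and recall
def f1_score(precision, recall):
    return 2 * (precision * recall) / (precision + recall)

# precision 0.5 with perfect recall still yields F1 of only ~0.667
print(f1_score(0.5, 1.0))
```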