EVALUATION METHOD FOR CLASSIFICATION
UNDERFITTING AND OVERFITTING
UNDERFITTING
WHEN MODEL IS TOO SIMPLE, BOTH TRAINING AND TEST ERRORS ARE LARGE
REFERS TO A MODEL THAT CAN NEITHER MODEL THE TRAINING DATA NOR GENERALIZE TO NEW DATA
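The point above can be sketched with a tiny hypothetical example: a constant-mean predictor is too simple for quadratic data, so its error is large on the training set and on the test set alike.

```python
# Underfitting sketch (hypothetical data): predict the training mean for
# data whose true relation is quadratic. The model is too simple, so both
# training and test errors are large.
train_x = [0, 1, 2, 3, 4]
train_y = [x * x for x in train_x]      # true relation: y = x^2
test_x = [5, 6]
test_y = [x * x for x in test_x]

mean_y = sum(train_y) / len(train_y)    # the "model": always predict the mean

def mse(ys):
    # mean squared error of the constant-mean predictor
    return sum((mean_y - y) ** 2 for y in ys) / len(ys)

train_error = mse(train_y)              # large
test_error = mse(test_y)                # also large
```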
OVERFITTING DUE TO NOISE
REFERS TO A MODEL THAT MODELS THE TRAINING DATA TOO WELL, INCLUDING ITS NOISE
OVERFITTING DUE TO INSUFFICIENT EXAMPLES
LACK OF DATA POINTS IN THE LOWER HALF OF THE DIAGRAM MAKES IT DIFFICULT TO CORRECTLY PREDICT THE CLASS LABELS OF THAT REGION
INSUFFICIENT NUMBER OF TRAINING RECORDS IN THE REGION CAUSES THE DECISION TREE TO PREDICT THE TEST EXAMPLES USING OTHER TRAINING RECORDS THAT ARE IRRELEVANT TO THE CLASSIFICATION TASK
OVERFITTING RESULTS IN DECISION TREES THAT ARE MORE COMPLEX THAN NECESSARY
TRAINING ERROR NO LONGER PROVIDES A GOOD ESTIMATE OF HOW WELL THE TREE WILL PERFORM ON PREVIOUSLY UNSEEN RECORDS
NEED NEW WAYS FOR ESTIMATING ERRORS
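The last two points can be illustrated with a minimal sketch (hypothetical data): a 1-nearest-neighbour "memorizer" reproduces noisy training labels perfectly, so its training error is zero, yet the memorized noise hurts it on unseen points, and training error says nothing about test performance.

```python
# Overfitting sketch (hypothetical 1-D data, true rule: label = 1 if x > 0.5).
# Two training labels are deliberately noisy; a 1-NN classifier memorizes
# them, giving zero training error but a large test error.
train = [(0.1, 0), (0.2, 0), (0.3, 1),   # (x, label); (0.3, 1) is noise
         (0.6, 1), (0.7, 1), (0.8, 0)]   # (0.8, 0) is noise
test = [(0.24, 0), (0.34, 0), (0.66, 1), (0.86, 1)]

def predict(x):
    # 1-NN: copy the label of the closest training record (pure memorization)
    return min(train, key=lambda p: abs(p[0] - x))[1]

train_error = sum(predict(x) != y for x, y in train) / len(train)  # 0.0
test_error = sum(predict(x) != y for x, y in test) / len(test)     # much larger
```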
MODEL EVALUATION AND SELECTION
EVALUATION METRICS
USE TEST SET OF CLASS-LABELED TUPLES INSTEAD OF TRAINING SET WHEN ASSESSING ACCURACY
METHODS FOR ESTIMATING A CLASSIFIER'S ACCURACY:
HOLDOUT METHOD, RANDOM SUBSAMPLING
CROSS-VALIDATION
ROC CURVES
INCREASING THE MODEL ACCURACY
MODEL SELECTION
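Of the estimation methods listed above, k-fold cross-validation can be sketched in a few lines (the data and the threshold "classifier" here are hypothetical, chosen only to keep the example self-contained): each fold serves once as the test set while the remaining folds form the training set, and the k accuracies are averaged.

```python
# k-fold cross-validation sketch with a toy threshold classifier
# (hypothetical data: integer feature x, label 1 when x >= 5).
data = [(x, 1 if x >= 5 else 0) for x in range(10)]

def train_model(train):
    # toy "learning": place the threshold midway between the class means
    zeros = [x for x, y in train if y == 0]
    ones = [x for x, y in train if y == 1]
    t = (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2
    return lambda x: 1 if x >= t else 0

def cross_validate(data, k):
    scores = []
    for i in range(k):
        test = data[i::k]                               # fold i held out
        train = [d for j, d in enumerate(data) if j % k != i]
        model = train_model(train)
        scores.append(sum(model(x) == y for x, y in test) / len(test))
    return sum(scores) / len(scores)                    # averaged accuracy

acc = cross_validate(data, k=5)
```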
CLASSIFIER EVALUATION METRICS
CLASSIFIER EVALUATION METRICS : SENSITIVITY AND SPECIFICITY
CLASSIFIER EVALUATION METRICS : PRECISION AND RECALL
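All four metrics follow directly from the confusion-matrix counts; the counts below are assumed for illustration only. Sensitivity (= recall) is TP/(TP+FN), specificity is TN/(TN+FP), and precision is TP/(TP+FP).

```python
# Confusion-matrix metrics sketch (hypothetical counts: 200 test tuples).
TP, FN, FP, TN = 90, 10, 30, 70

sensitivity = TP / (TP + FN)   # true positive rate (identical to recall)
specificity = TN / (TN + FP)   # true negative rate
precision = TP / (TP + FP)     # fraction of positive predictions that are right
recall = sensitivity
accuracy = (TP + TN) / (TP + TN + FP + FN)
```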
EVALUATING CLASSIFIER ACCURACY : HOLDOUT & CROSS - VALIDATION METHODS
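The holdout method can be sketched as follows (data and classifier are hypothetical): shuffle the class-labeled records once, reserve a fraction (commonly one third) as the test set, train on the remainder, and report accuracy on the held-out part only.

```python
import random

# Holdout sketch: 2/3 train, 1/3 test, on hypothetical labelled records
# (feature x, label 1 when x >= 0.5).
random.seed(42)
data = [(x / 30, 1 if x >= 15 else 0) for x in range(30)]
random.shuffle(data)

split = len(data) * 2 // 3
train, test = data[:split], data[split:]

# toy classifier "learned" from the training part only:
# predict 1 at or above the smallest positive training feature
threshold = min(x for x, y in train if y == 1)

def predict(x):
    return 1 if x >= threshold else 0

# accuracy is estimated on the held-out test set, never on the training set
accuracy = sum(predict(x) == y for x, y in test) / len(test)
```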