Confusion Matrix Metrics
The set of performance metrics built from the four counts in the confusion matrix: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN).
Why It Matters
Understanding TP, TN, FP, and FN is essential for evaluating any classification model. Different applications require prioritizing different metrics.
Example
In a COVID test: TP = correctly detected infection, FP = healthy person told they are infected, FN = infected person told they are healthy, TN = correctly identified healthy.
Think of it like...
Like the four possible outcomes of a fire alarm: it correctly sounds for a real fire (TP), falsely sounds with no fire (FP), fails to sound during a fire (FN), or correctly stays silent (TN).
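The four outcomes above can be tallied directly from paired labels. A minimal sketch, using hypothetical data where 1 = positive (e.g. infected) and 0 = negative (healthy):

```python
# Hypothetical labels: 1 = positive (infected), 0 = negative (healthy)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # actual conditions
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]  # model predictions

# Count each of the four confusion-matrix outcomes
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

print(tp, tn, fp, fn)  # → 3 3 1 1
```

Every prediction falls into exactly one of the four buckets, so the counts always sum to the total number of predictions.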
Related Terms
Confusion Matrix
A table that summarizes the performance of a classification model by showing true positives, true negatives, false positives, and false negatives. It reveals the types of errors a model makes.
Precision
Of all the items the model predicted as positive, the proportion that were actually positive: TP / (TP + FP). Precision measures how trustworthy the model's positive predictions are.
Recall
Of all the actually positive items in the dataset, the proportion that the model correctly identified: TP / (TP + FN). Recall measures how completely the model finds all relevant items.
Accuracy
The proportion of correct predictions out of all predictions made by a model: (TP + TN) / (TP + TN + FP + FN). While intuitive, accuracy can be misleading for imbalanced datasets, where a model that always predicts the majority class scores highly without learning anything useful.
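The imbalance problem is easy to demonstrate. A minimal sketch with hypothetical numbers: 990 healthy and 10 infected cases, and a degenerate model that always predicts "healthy":

```python
# Hypothetical imbalanced dataset: 990 negatives, 10 positives
y_true = [0] * 990 + [1] * 10
y_pred = [0] * 1000  # degenerate model: always predicts negative

# Accuracy looks excellent despite the model detecting nothing
correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
accuracy = correct / len(y_true)

# Recall exposes the failure: zero positives were found
found = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
recall = found / 10

print(accuracy)  # → 0.99
print(recall)    # → 0.0
```

This is why precision, recall, and F1 are preferred over raw accuracy whenever one class is rare.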
F1 Score
The harmonic mean of precision and recall: F1 = 2 · (precision · recall) / (precision + recall), providing a single metric that balances both. F1 scores range from 0 to 1, with 1 being perfect precision and recall.
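All of the metrics above follow directly from the four counts. A minimal sketch with hypothetical counts:

```python
# Hypothetical confusion-matrix counts
tp, tn, fp, fn = 40, 50, 5, 10

precision = tp / (tp + fp)                   # 40 / 45
recall = tp / (tp + fn)                      # 40 / 50 = 0.8
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 90 / 105

# Harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
```

Because the harmonic mean is dominated by the smaller of the two values, F1 stays low whenever either precision or recall is poor, which is exactly the balancing behavior described above.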