Recall
Of all the actually positive items in the dataset, the proportion that the model correctly identified: true positives divided by the sum of true positives and false negatives. Recall measures how completely the model finds all relevant items.
Why It Matters
High recall means the model catches most positive cases. This is critical for applications where missing a case is dangerous, like cancer screening or security threats.
Example
A cancer screening model that correctly identifies 95 out of 100 actual cancer cases achieves 95% recall, but the 5 missed cases are false negatives.
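The screening example above can be sketched as a quick calculation; the counts are the illustrative numbers from the example, not real clinical data.

```python
# Recall = TP / (TP + FN), using the cancer-screening numbers above.
true_positives = 95   # actual cancer cases the model caught
false_negatives = 5   # actual cancer cases the model missed

recall = true_positives / (true_positives + false_negatives)
print(recall)  # 0.95
```

Note that false positives do not appear anywhere in this formula: recall only cares about how many of the real positives were found.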
Think of it like...
Like a search-and-rescue team — high recall means they find almost everyone who needs help, even if they occasionally check on people who are fine (false positives).
Related Terms
Precision
Of all the items the model predicted as positive, the proportion that were actually positive. Precision measures how trustworthy the model's positive predictions are.
Accuracy
The percentage of correct predictions out of all predictions made by a model. While intuitive, accuracy can be misleading for imbalanced datasets.
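A minimal sketch of why accuracy misleads on imbalanced data, using made-up labels: a model that always predicts "negative" on a 99:1 dataset scores 99% accuracy while finding zero positives.

```python
# Imbalanced dataset: 99 negatives, 1 positive (illustrative made-up data).
labels = [0] * 99 + [1]
preds = [0] * 100  # a useless model that always predicts "negative"

# Accuracy looks excellent, yet recall on the positive class is 0.
accuracy = sum(l == p for l, p in zip(labels, preds)) / len(labels)
print(accuracy)  # 0.99
```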
F1 Score
The harmonic mean of precision and recall, providing a single metric that balances both. F1 scores range from 0 to 1, with 1 being perfect precision and recall.
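The harmonic mean above can be computed directly; the precision and recall values here are arbitrary illustrative numbers, not from a real model.

```python
# F1 = 2 * (precision * recall) / (precision + recall)
precision = 0.80  # illustrative value
recall = 0.95     # illustrative value

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.869
```

Because it is a harmonic mean, F1 is pulled toward the lower of the two values, so a model cannot score well by maximizing one metric while neglecting the other.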
Confusion Matrix
A table that summarizes the performance of a classification model by showing true positives, true negatives, false positives, and false negatives. It reveals the types of errors a model makes.
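Tying the terms together, a minimal sketch that tallies the four confusion-matrix cells from (actual, predicted) label pairs and derives accuracy, precision, and recall from them; the label pairs are made-up illustrative data.

```python
# Each pair is (actual_label, predicted_label); 1 = positive, 0 = negative.
pairs = [
    (1, 1), (1, 1), (1, 0),  # actual positives: 2 caught, 1 missed
    (0, 0), (0, 0), (0, 1),  # actual negatives: 2 correct, 1 false alarm
]

tp = sum(1 for a, p in pairs if a == 1 and p == 1)  # true positives
fn = sum(1 for a, p in pairs if a == 1 and p == 0)  # false negatives
fp = sum(1 for a, p in pairs if a == 0 and p == 1)  # false positives
tn = sum(1 for a, p in pairs if a == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(pairs)  # correct predictions over all predictions
precision = tp / (tp + fp)         # trustworthiness of positive predictions
recall = tp / (tp + fn)            # completeness over actual positives
```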