Early Stopping
A regularization technique where training is halted when the model's performance on validation data stops improving, even if training loss continues to decrease. It prevents overfitting by finding the optimal training duration.
Why It Matters
Early stopping is one of the simplest and most effective regularization techniques: it prevents overfitting automatically, requiring only a stopping criterion (such as a patience threshold) rather than a hand-picked number of training epochs.
Example
Monitoring validation loss during training and stopping at epoch 42 of a planned 100 because validation loss began increasing there, even though training loss was still decreasing.
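The example above can be sketched in a few lines. This is a minimal illustration, not any framework's API: the function name `early_stop_epoch`, the `patience` parameter (how many epochs without improvement to tolerate), and the synthetic loss curve are all assumptions made for demonstration.

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch (0-indexed) at which training would halt,
    or None if early stopping never triggers."""
    best_loss = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            # Validation loss improved: record the new best.
            best_loss = loss
            best_epoch = epoch
        elif epoch - best_epoch >= patience:
            # No improvement for `patience` epochs: stop here.
            return epoch
    return None

# Synthetic validation-loss curve: improves, then rises (overfitting begins).
losses = [1.0, 0.8, 0.6, 0.5, 0.45, 0.47, 0.5, 0.55, 0.6, 0.7]
print(early_stop_epoch(losses))  # → 7 (three epochs past the minimum at epoch 4)
```

In practice, frameworks also restore the model weights saved at the best epoch, so the final model is the one from the validation minimum, not the one at the moment training halted.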
Think of it like...
Like taking cookies out of the oven when they look perfectly golden, rather than waiting for the timer — leaving them too long means they burn (overfit).
Related Terms
Overfitting
When a model learns the training data too well — including its noise and random fluctuations — and performs poorly on new, unseen data. The model essentially memorizes rather than generalizes.
Regularization
Techniques used to prevent overfitting by adding constraints or penalties to the model during training. Regularization discourages the model from becoming too complex or fitting noise in the training data.
Validation Data
A subset of data used during training to tune hyperparameters and monitor model performance without touching the test set. It acts as an intermediate checkpoint between training and final evaluation.
Epoch
One complete pass through the entire training dataset during model training. Models typically require multiple epochs to learn effectively, with each pass refining the model's understanding.