Underfitting
When a model is too simple to capture the underlying patterns in the data, resulting in poor performance on both training and new data. The model has not learned enough from the training data.
Why It Matters
Underfitting means your model is leaving predictive accuracy on the table. Recognizing it early lets you invest in a more expressive model before wasting time tuning or deploying one that cannot capture the signal.
Example
A linear model trying to predict a complex curve — it draws a straight line through data that clearly follows a curved pattern, missing the key relationships.
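A minimal sketch of this failure, using NumPy on synthetic data (the quadratic curve, noise level, and seed are illustrative choices, not from the text): a straight-line fit to curved data leaves most of the error on the table, while a model with matching capacity does not.

```python
import numpy as np

# Synthetic curved data: y = x^2 plus noise (an illustrative assumption).
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = x**2 + rng.normal(0, 0.5, size=x.shape)

# A straight line (degree-1 polynomial) underfits the curve...
linear_pred = np.polyval(np.polyfit(x, y, deg=1), x)
# ...while a degree-2 polynomial can capture the true pattern.
quadratic_pred = np.polyval(np.polyfit(x, y, deg=2), x)

mse_linear = np.mean((y - linear_pred) ** 2)
mse_quadratic = np.mean((y - quadratic_pred) ** 2)
print(f"linear MSE:    {mse_linear:.2f}")
print(f"quadratic MSE: {mse_quadratic:.2f}")
```

The linear model's error stays large no matter how it is trained, because no straight line can follow the curve; that irreducibly high training error is the signature of underfitting.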
Think of it like...
Like trying to summarize an entire novel in a single sentence — you lose so much nuance and detail that the summary is not useful.
Related Terms
Overfitting
When a model learns the training data too well — including its noise and random fluctuations — and performs poorly on new, unseen data. The model essentially memorizes rather than generalizes.
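The memorize-versus-generalize distinction can be sketched the same way (again with an assumed synthetic setup): a degree-9 polynomial through 10 noisy points reproduces the training data almost exactly, then does far worse on fresh points from the same underlying function.

```python
import numpy as np

# Small noisy training set drawn from a smooth function (illustrative).
rng = np.random.default_rng(1)
x_train = np.linspace(-1, 1, 10)
y_train = np.sin(2 * x_train) + rng.normal(0, 0.1, size=x_train.shape)

# Degree-9 polynomial through 10 points: memorizes the data, noise included.
overfit_coefs = np.polyfit(x_train, y_train, deg=9)
train_mse = np.mean((y_train - np.polyval(overfit_coefs, x_train)) ** 2)

# Fresh points from the same function expose the failure to generalize.
x_test = np.linspace(-0.95, 0.95, 200)
test_mse = np.mean((np.sin(2 * x_test) - np.polyval(overfit_coefs, x_test)) ** 2)

print(f"train MSE: {train_mse:.2e}")  # near zero: memorized
print(f"test MSE:  {test_mse:.2e}")   # much larger: poor generalization
```

The large gap between training and test error, rather than either number alone, is what signals overfitting.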
Bias-Variance Tradeoff
The fundamental tension in ML between a model that is too simple (high bias, underfitting) and one that is too complex (high variance, overfitting). The goal is finding the sweet spot.
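The sweet spot can be seen by sweeping model capacity on one dataset (a hypothetical setup, assuming the same kind of synthetic sine data): training error only falls as capacity grows, while test error is worst at both extremes.

```python
import numpy as np

rng = np.random.default_rng(2)
x_train = np.linspace(-1, 1, 12)
y_train = np.sin(2 * x_train) + rng.normal(0, 0.1, size=x_train.shape)
x_test = np.linspace(-0.95, 0.95, 200)
y_test = np.sin(2 * x_test)

results = {}
for deg in (1, 3, 11):  # too simple, about right, too complex
    coefs = np.polyfit(x_train, y_train, deg=deg)
    train_mse = np.mean((y_train - np.polyval(coefs, x_train)) ** 2)
    test_mse = np.mean((y_test - np.polyval(coefs, x_test)) ** 2)
    results[deg] = (train_mse, test_mse)
    print(f"degree {deg:2d}: train {train_mse:.4f}  test {test_mse:.4f}")
```

Degree 1 underfits (high bias: both errors high), degree 11 overfits (high variance: tiny training error, large test error), and the middle capacity generalizes best.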
Regularization
Techniques used to prevent overfitting by adding constraints or penalties to the model during training. Regularization discourages the model from becoming too complex or fitting noise in the training data.
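One concrete instance of such a penalty is ridge regression, sketched below with NumPy under the same illustrative assumptions (noisy sine data, degree-9 polynomial features): adding an L2 term lam * ||w||^2 to the loss shrinks the weights relative to an unconstrained least-squares fit.

```python
import numpy as np

# Noisy data with high-degree polynomial features prone to overfitting.
rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 30)
y = np.sin(2 * x) + rng.normal(0, 0.1, size=x.shape)
X = np.vander(x, 10)  # degree-9 polynomial feature matrix

# Ordinary least squares: unconstrained fit.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge regression: the closed form (X^T X + lam*I)^-1 X^T y adds an L2
# penalty that shrinks weights, discouraging the wiggly high-degree
# terms that would otherwise fit noise.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print(f"||w_ols||   = {np.linalg.norm(w_ols):.2f}")
print(f"||w_ridge|| = {np.linalg.norm(w_ridge):.2f}")
```

Larger lam means stronger shrinkage: at lam = 0 ridge reduces to ordinary least squares, and as lam grows the weights are pulled toward zero, trading a little bias for lower variance.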