Machine Learning

Regularization

Techniques used to prevent overfitting by adding constraints or penalties to the model during training. Regularization discourages the model from becoming too complex or fitting noise in the training data.

Why It Matters

Regularization is essential for building models that work in production. Without it, models often memorize training data and fail on real-world inputs.

Example

L2 regularization (Ridge) adds a penalty proportional to the square of the weights, forcing the model to keep weights small and discouraging over-reliance on any single feature.
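
A minimal sketch of this idea, using NumPy and gradient descent on a linear model (the function name `ridge_fit` and the hyperparameter values are illustrative, not from any particular library):

```python
import numpy as np

# Minimal sketch: linear regression with an L2 (Ridge) penalty,
# trained by gradient descent. `lam` controls regularization strength.
def ridge_fit(X, y, lam=1.0, lr=0.01, steps=1000):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        residual = X @ w - y
        # Gradient of the penalized loss (1/n)*||Xw - y||^2 + lam*||w||^2:
        # the extra 2*lam*w term pulls every weight toward zero.
        grad = (2 / n) * X.T @ residual + 2 * lam * w
        w -= lr * grad
    return w

# Fit the same data with a weak and a strong penalty.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)
w_weak = ridge_fit(X, y, lam=0.01)
w_strong = ridge_fit(X, y, lam=10.0)
```

Increasing `lam` shrinks all weights toward zero: `w_strong` has a much smaller norm than `w_weak`, trading a little training-set fit for robustness to noise.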

Think of it like...

Like putting training wheels on a bicycle — they constrain how far you can lean, preventing crashes (overfitting) while you learn to balance.

Related Terms