Model Drift
The gradual degradation of a model's predictive performance over time as the real-world environment changes. Model drift can be caused by data drift, concept drift, or both.
Why It Matters
Model drift silently destroys value. A model that was 95% accurate at launch might drop to 75% within six months, with no one noticing until the damage is done.
Example
A housing price prediction model becoming increasingly inaccurate as the market shifts — it was trained on pre-pandemic data but the market has fundamentally changed.
Think of it like...
Like a map that slowly becomes outdated as new roads are built and old ones are closed — it was accurate when printed but gradually loses reliability.
Related Terms
Data Drift
A change in the statistical properties of the input data over time compared to the data the model was trained on. When data drifts, model predictions become less reliable.
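One common way to quantify data drift is the Population Stability Index (PSI), which compares the binned distribution of a feature in production against the training data. A minimal pure-Python sketch, using the common (but not universal) rules of thumb that PSI below 0.1 means stable and above 0.25 means significant drift:

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    Rule of thumb: < 0.1 stable, > 0.25 significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins

    def bucket_fracs(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)  # clamp x == hi into last bin
            counts[idx] += 1
        # floor at a tiny fraction to avoid log(0) on empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
train = [random.gauss(0, 1) for _ in range(5000)]        # training distribution
prod_same = [random.gauss(0, 1) for _ in range(5000)]    # production: no drift
prod_shift = [random.gauss(1.5, 1) for _ in range(5000)] # production: mean shifted

print(psi(train, prod_same))   # small: input distribution is stable
print(psi(train, prod_shift))  # large: input distribution has drifted
```

Note that PSI only looks at the inputs, so it can fire even when predictions are still fine, and it cannot see concept drift at all.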
Concept Drift
A change in the underlying relationship between inputs and outputs over time. Unlike data drift, concept drift means the rules of the game have changed, not just the distribution of inputs.
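The distinction can be made concrete with a toy decision rule: the inputs are distributed exactly as before, but the input-to-output relationship has moved, so a fixed fraction of predictions silently become wrong. The two rules below are illustrative stand-ins, not any real model:

```python
def trained_rule(x):
    """Relationship the model learned at training time."""
    return x > 0.5

def current_rule(x):
    """The true relationship after concept drift: threshold moved."""
    return x > 0.7

# Input distribution is unchanged (uniform grid over [0, 1)),
# so data-drift checks on the inputs would see nothing.
inputs = [i / 100 for i in range(100)]
disagreement = sum(trained_rule(x) != current_rule(x) for x in inputs) / len(inputs)
print(disagreement)  # fraction of inputs the model now gets wrong
```

Because the inputs themselves look identical, detecting concept drift generally requires ground-truth labels (or a proxy for them), not just input statistics.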
Model Monitoring
The practice of continuously tracking an ML model's performance, predictions, and input data in production to detect degradation, drift, or anomalies after deployment.
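A minimal monitoring sketch, assuming ground-truth labels arrive after each prediction: track rolling accuracy over a sliding window and alert when it falls more than a tolerance below the launch baseline. The class name and thresholds here are illustrative:

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-accuracy monitor: alert when live accuracy drops
    below (baseline - tolerance) over the most recent window."""

    def __init__(self, window=100, baseline=0.95, tolerance=0.05):
        self.outcomes = deque(maxlen=window)  # True = correct prediction
        self.baseline = baseline
        self.tolerance = tolerance

    def record(self, predicted, actual):
        self.outcomes.append(predicted == actual)

    def rolling_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def is_degraded(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

monitor = AccuracyMonitor(window=50, baseline=0.95)
for _ in range(50):
    monitor.record(1, 1)       # healthy period: every prediction correct
print(monitor.is_degraded())   # False

for _ in range(50):
    monitor.record(1, 0)       # drifted period: every prediction wrong
print(monitor.is_degraded())   # True
```

In practice labels often arrive late or never, so production monitors also track proxy signals such as input distributions and prediction-score distributions.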
Retraining
The process of training a model again on updated data to restore or improve its performance. Retraining addresses model drift and incorporates new patterns the original model did not learn.
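A deliberately tiny sketch of why retraining on a recent window helps, echoing the housing example above: the "model" is just the mean of its training prices, and the market shifts upward midway through the history. All numbers are made up for illustration:

```python
def fit_mean_predictor(prices):
    """Trivial stand-in for a model: predict the mean of training prices."""
    return sum(prices) / len(prices)

# Price history: the market shifted upward halfway through.
history = [100.0] * 200 + [150.0] * 200

stale_model = fit_mean_predictor(history[:200])   # trained pre-shift, never updated
fresh_model = fit_mean_predictor(history[-100:])  # retrained on a recent window

current_price = 150.0
print(abs(stale_model - current_price))  # large error: model has drifted
print(abs(fresh_model - current_price))  # retraining restored accuracy
```

Real retraining pipelines must also choose the window (recent-only vs. all data, possibly time-weighted) and validate the new model before it replaces the old one.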
MLOps
Machine Learning Operations — the set of practices that combine ML, DevOps, and data engineering to deploy and maintain ML models in production reliably and efficiently.