Model Monitoring
The practice of continuously tracking an ML model's performance, predictions, and input data in production to detect degradation, drift, or anomalies after deployment.
Why It Matters
Without monitoring, models silently degrade over time as the world changes. What worked last year may be making terrible predictions today without anyone knowing.
Example
Dashboards tracking a fraud model's precision and recall daily, alerting the team when precision drops below 85%, a signal that the model needs retraining.
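The daily check in this example can be sketched in a few lines. This is a minimal illustration, not a production monitoring system: the 85% threshold comes from the example above, and the function names and label-pair format are assumptions made for the sketch.

```python
# Minimal sketch of a daily precision check, assuming we receive
# (predicted_fraud, actually_fraud) pairs for the day's reviewed traffic.
# The 0.85 threshold mirrors the example above; names are illustrative.

PRECISION_ALERT_THRESHOLD = 0.85

def precision(pairs):
    """Precision = true positives / all positive predictions."""
    flagged = [actual for predicted, actual in pairs if predicted]
    if not flagged:
        return None  # no positive predictions today; nothing to measure
    return sum(flagged) / len(flagged)

def check_daily_precision(pairs):
    """Return an alert message when precision falls below threshold, else None."""
    p = precision(pairs)
    if p is not None and p < PRECISION_ALERT_THRESHOLD:
        return f"ALERT: precision {p:.2f} below {PRECISION_ALERT_THRESHOLD}"
    return None

# Example day: 10 flagged transactions, only 8 truly fraudulent -> precision 0.80
todays_pairs = [(True, True)] * 8 + [(True, False)] * 2 + [(False, False)] * 90
print(check_daily_precision(todays_pairs))
```

In practice this check would run on a schedule and push its alert to a dashboard or paging system rather than printing it.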
Think of it like...
Like a patient wearing a heart monitor after surgery — continuous tracking catches problems early before they become emergencies.
Related Terms
MLOps
Machine Learning Operations — the set of practices that combine ML, DevOps, and data engineering to deploy and maintain ML models in production reliably and efficiently.
Data Drift
A change in the statistical properties of the input data over time compared to the data the model was trained on. When data drifts, model predictions become less reliable.
Concept Drift
A change in the underlying relationship between inputs and outputs over time. Unlike data drift, concept drift means the rules of the game have changed, not just the distribution of inputs.
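The distinction can be shown in a few lines: below, the input distribution is identical before and after, but the true input-to-output rule changes, so a frozen model degrades. The rules and numbers are made up purely for illustration.

```python
# Tiny illustration of concept drift: same inputs, changed ground-truth rule.
inputs = list(range(10))  # same input distribution before and after drift

def model(x):
    return x > 5  # the model learned the training-time rule exactly

def old_rule(x):
    return x > 5  # input-output relationship at training time

def new_rule(x):
    return x > 7  # relationship after the world changed

def accuracy(truth_rule):
    return sum(model(x) == truth_rule(x) for x in inputs) / len(inputs)

print(accuracy(old_rule))  # 1.0: model matches the old concept
print(accuracy(new_rule))  # 0.8: identical inputs, degraded performance
```

Because the inputs never changed, drift detectors that watch only input distributions would miss this; catching concept drift generally requires monitoring outcomes against fresh labels.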
Model Drift
The gradual degradation of a model's predictive performance over time as the real-world environment changes. Model drift can be caused by data drift, concept drift, or both.
Deployment
The process of making a trained ML model available for use in production applications. Deployment involves packaging the model, setting up serving infrastructure, and establishing monitoring.
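The packaging and serving steps can be sketched at their simplest with Python's built-in serialization. This assumes a plain Python model object with a `.predict()` method; real deployments typically add a model registry, a serving framework, and the monitoring described above, all out of scope here.

```python
# Minimal sketch of packaging a trained model and loading it for serving.
import os
import pickle
import tempfile

class ThresholdModel:
    """Stand-in for a trained model: flags amounts above a learned cutoff."""
    def __init__(self, cutoff):
        self.cutoff = cutoff
    def predict(self, amount):
        return amount > self.cutoff

# 1. Package: serialize the trained model to an artifact file
model_path = os.path.join(tempfile.mkdtemp(), "fraud_model.pkl")
with open(model_path, "wb") as f:
    pickle.dump(ThresholdModel(cutoff=900.0), f)

# 2. Serve: the production process loads the artifact once at startup...
with open(model_path, "rb") as f:
    served_model = pickle.load(f)

# ...and answers prediction requests
print(served_model.predict(1200.0))  # flags a large transaction
print(served_model.predict(40.0))    # passes a small one
```

Separating the artifact from the serving process is the key idea: the same file can be versioned, rolled back, and swapped out when monitoring says the model needs retraining.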