Model Governance
The policies, processes, and tools for managing AI models throughout their lifecycle, from development through deployment to retirement. Governance aims to keep models compliant, fair, and performant.
Why It Matters
Model governance is increasingly required by regulation and is essential for enterprises deploying AI at scale. It prevents the 'wild west' of ungoverned model proliferation.
Example
A governance framework tracking all 200+ models in an enterprise: their owners, training data sources, bias test results, approval status, and scheduled review dates.
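The tracked fields in this example map naturally onto a simple record type. Below is a minimal Python sketch of one such record plus a typical governance query; the class name, field names, and the `overdue_reviews` helper are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class GovernanceRecord:
    # Each field mirrors an item tracked in the example above.
    model_name: str
    owner: str                 # accountable team or person
    data_sources: list[str]    # training data lineage
    bias_tests_passed: bool    # latest fairness audit result
    approval_status: str       # e.g. "approved", "pending", "retired"
    next_review: date          # scheduled review date

def overdue_reviews(records: list[GovernanceRecord], today: date) -> list[str]:
    """Names of approved models whose scheduled review date has passed."""
    return [r.model_name for r in records
            if r.approval_status == "approved" and r.next_review < today]

inventory = [
    GovernanceRecord("churn-predictor-v3", "risk-analytics",
                     ["crm_exports_2023"], True, "approved",
                     date(2025, 6, 1)),
]
print(overdue_reviews(inventory, date(2025, 7, 1)))  # ['churn-predictor-v3']
```

A real enterprise inventory of 200+ models would persist these records in a database, but the governance queries (overdue reviews, unapproved models in production, missing bias tests) follow the same pattern.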
Think of it like...
Like fleet management for vehicles — tracking every model's status, maintenance schedule, and compliance documentation across the entire organization.
Related Terms
AI Governance
The frameworks, policies, processes, and organizational structures that guide the responsible development, deployment, and monitoring of AI systems within organizations and across society.
Model Registry
A centralized repository for storing, versioning, and managing trained ML models along with their metadata (metrics, parameters, lineage). It serves as the system of record for models.
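The "system of record" idea can be sketched as a mapping from (name, version) to metadata. This is a minimal in-memory illustration under assumed method names; production registries (such as MLflow's) add persistent storage, deployment stages, and lineage tracking on top of the same core idea.

```python
# Minimal sketch of a model registry: a (name, version) -> metadata store.
# Class and method names here are illustrative, not a real library's API.
class ModelRegistry:
    def __init__(self):
        self._store = {}

    def register(self, name, version, artifact_uri, metrics, params):
        """Record a trained model version along with its metadata."""
        self._store[(name, version)] = {
            "artifact_uri": artifact_uri,  # where the serialized model lives
            "metrics": metrics,            # evaluation results
            "params": params,              # training hyperparameters
        }

    def latest_version(self, name):
        """Highest registered version number for a model name."""
        versions = [v for (n, v) in self._store if n == name]
        return max(versions) if versions else None

    def get(self, name, version):
        return self._store[(name, version)]

registry = ModelRegistry()
registry.register("churn-predictor", 1, "s3://models/churn/1",
                  metrics={"auc": 0.81}, params={"max_depth": 6})
registry.register("churn-predictor", 2, "s3://models/churn/2",
                  metrics={"auc": 0.84}, params={"max_depth": 8})
print(registry.latest_version("churn-predictor"))  # 2
```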
Model Monitoring
The practice of continuously tracking an ML model's performance, predictions, and input data in production to detect degradation, drift, or anomalies after deployment.
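One concrete monitoring check is comparing a production feature's distribution against its training baseline. The sketch below uses a simple z-test on the mean; the alert threshold of 3.0 standard errors is an illustrative assumption, and real systems typically use richer statistics (e.g., population stability index or KS tests).

```python
import statistics

def mean_shift_alert(baseline, live, threshold=3.0):
    """Alert if the live mean drifts more than `threshold` standard
    errors from the baseline mean (a simple z-test on the mean)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    se = sigma / len(live) ** 0.5          # standard error of the live mean
    z = abs(statistics.mean(live) - mu) / se
    return z > threshold

# Baseline: feature values seen during training.
baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8, 10.1, 10.4]
stable   = [10.0, 10.3, 9.9, 10.2]   # production data close to baseline
drifted  = [14.0, 14.5, 13.8, 14.2]  # production data that has drifted

print(mean_shift_alert(baseline, stable))   # False
print(mean_shift_alert(baseline, drifted))  # True
```

In practice a check like this would run on a schedule over recent prediction logs, with alerts feeding back into the governance review process.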
Compliance
The process of ensuring AI systems meet regulatory requirements, industry standards, and organizational policies. AI compliance is becoming increasingly complex as regulations proliferate.
Responsible AI
An approach to developing and deploying AI that prioritizes ethical considerations, fairness, transparency, accountability, and societal benefit throughout the entire AI lifecycle.