Explainability
The ability to understand and articulate how an AI model reaches its decisions or predictions. Explainable AI (XAI) makes the decision-making process transparent and comprehensible to humans.
Why It Matters
Regulations increasingly require AI explainability, especially for high-stakes decisions in domains like healthcare, finance, and criminal justice. 'Black box' AI is becoming unacceptable in these settings.
Example
A loan denial accompanied by an explanation: 'This application was declined primarily due to a debt-to-income ratio of 45% (threshold: 40%) and limited credit history.'
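A decision function like this can be sketched as code that returns its reasons alongside the outcome. This is a hypothetical illustration (the function name, thresholds, and feature names are invented for the example, not taken from any real lending system):

```python
# Hypothetical loan screen that explains its own decision.
# Thresholds and feature names are illustrative only.
def score_application(dti, credit_history_years):
    reasons = []
    if dti > 0.40:
        reasons.append(
            f"debt-to-income ratio of {dti:.0%} exceeds the 40% threshold"
        )
    if credit_history_years < 2:
        reasons.append(
            f"credit history of {credit_history_years} year(s) is limited"
        )
    approved = not reasons  # approve only if no decline reasons fired
    return approved, reasons

approved, reasons = score_application(dti=0.45, credit_history_years=1)
```

Returning the triggered rules, rather than just a yes/no, is what turns an opaque decision into an explainable one.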
Think of it like...
Like a math teacher who requires students to show their work — the answer alone is not enough; you must demonstrate how you got there.
Related Terms
Interpretability
The degree to which a human can understand the internal mechanisms and reasoning process of a machine learning model. More interpretable models allow deeper inspection of how they work.
Transparency
The principle that AI systems should operate in a way that allows stakeholders to understand how they work, what data they use, and how decisions are made.
Black Box
A model or system whose internal workings are not visible or understandable to the user — you can see the inputs and outputs but not the reasoning in between. Most deep learning models are considered black boxes.
SHAP
SHapley Additive exPlanations — a method based on game theory that explains individual predictions by calculating each feature's contribution to the prediction. SHAP values are additive and consistent.
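The additivity property can be demonstrated with a minimal sketch that computes exact Shapley values for a toy model by averaging each feature's marginal contribution over all feature orderings. This is a from-scratch illustration of the underlying game-theoretic idea, not the real `shap` library (the toy model and baseline values are assumptions for the example):

```python
from itertools import permutations

# Toy additive "model": a risk score from two features (illustrative only).
def model(f):
    return 50 + 100 * f["dti"] - 2 * f["history"]

def shapley_values(model, instance, baseline):
    """Exact Shapley values: average marginal contribution of each
    feature over every order in which features can be 'revealed'."""
    names = list(instance)
    contrib = {n: 0.0 for n in names}
    orders = list(permutations(names))
    for order in orders:
        current = dict(baseline)      # start from the baseline input
        prev = model(current)
        for name in order:
            current[name] = instance[name]   # reveal this feature's value
            out = model(current)
            contrib[name] += out - prev      # its marginal contribution
            prev = out
    return {n: v / len(orders) for n, v in contrib.items()}

instance = {"dti": 0.45, "history": 1}
baseline = {"dti": 0.30, "history": 5}
phi = shapley_values(model, instance, baseline)

# Additivity: the contributions sum exactly to the gap between this
# prediction and the baseline prediction.
total = sum(phi.values())
gap = model(instance) - model(baseline)
```

Brute-force enumeration of all orderings is exponential in the number of features; the SHAP method's contribution is computing or approximating these values efficiently for real models.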
LIME
Local Interpretable Model-agnostic Explanations — a technique that explains individual predictions by approximating the complex model locally with a simple, interpretable model.
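The core idea — sample perturbations near the instance, weight them by proximity, and fit a simple surrogate — can be sketched in one dimension. This is a minimal from-scratch illustration of the local-surrogate concept, not the real `lime` package (the black-box function, kernel width, and sample counts are assumptions for the example):

```python
import math
import random

# A nonlinear "complex model" we want to explain near a point.
def black_box(x):
    return 1 / (1 + math.exp(-(x - 2)))  # sigmoid centered at x = 2

def lime_1d(f, x0, n_samples=500, width=1.0, kernel_width=0.5, seed=0):
    """Fit a weighted linear surrogate y ~ a + b*x around x0.
    Points closer to x0 get higher weight via a Gaussian kernel."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-width, width) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]

    # Closed-form weighted least squares for simple linear regression.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    a = my - b * mx
    return a, b

a, b = lime_1d(black_box, x0=2.0)
# b approximates the model's local slope at x0 (the sigmoid's
# derivative at its center is 0.25), even though the surrogate
# knows nothing about the model's internals.
```

The surrogate is faithful only near `x0`; at a different point, refitting would yield a different local explanation — which is exactly what "local" means in LIME.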