Explainable AI
The subfield focused on making AI decision-making processes understandable to humans. Explainable AI (XAI) techniques provide insight into why a model made a specific prediction.
Why It Matters
Regulations such as the EU AI Act impose transparency requirements on high-risk AI systems. Beyond compliance, XAI builds trust, enables debugging, and supports accountability.
Example
SHAP values showing that a loan denial was 40% driven by credit score, 30% by employment history, and 30% by debt ratio — making the reasoning transparent.
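The key property behind a breakdown like this is that SHAP values are additive: a base value plus the per-feature contributions reconstructs the model's output. A minimal sketch of that bookkeeping, where the base rate and contribution values are invented for illustration (not real SHAP output):

```python
# Illustrative only: hypothetical SHAP-style attributions for one loan decision.
# SHAP values are additive: base value + sum of contributions = model output.
base_value = 0.20  # hypothetical average denial score across all applicants
contributions = {  # hypothetical per-feature SHAP values for this applicant
    "credit_score": 0.24,        # 40% of the total attribution
    "employment_history": 0.18,  # 30%
    "debt_ratio": 0.18,          # 30%
}

total_attribution = sum(contributions.values())
prediction = base_value + total_attribution

# Express each feature's share of the attribution as a percentage.
shares = {f: v / total_attribution for f, v in contributions.items()}

print(f"denial score: {prediction:.2f}")
for feature, share in shares.items():
    print(f"{feature}: {share:.0%} of the attribution")
```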
Think of it like...
Like a glass-bottom boat — you can see exactly what is happening beneath the surface, not just the destination you arrive at.
Related Terms
Explainability
The ability to understand and articulate how an AI model reaches its decisions or predictions. Explainable AI (XAI) makes the decision-making process transparent and comprehensible to humans.
Interpretability
The degree to which a human can understand the internal mechanisms and reasoning process of a machine learning model. More interpretable models allow deeper inspection of how they work.
SHAP
SHapley Additive exPlanations — a method based on game theory that explains individual predictions by calculating each feature's contribution to the prediction. SHAP values are additive and consistent.
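For a model with only a few features, Shapley values can be computed exactly by averaging each feature's marginal contribution over every ordering in which features are "revealed". A self-contained sketch of that definition in pure Python, where the toy scoring model, inputs, and baseline are invented for illustration:

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    """Exact Shapley values for input x relative to a baseline,
    averaging marginal contributions over all feature orderings."""
    n = len(x)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)  # start from the baseline input
        prev = model(current)
        for i in order:
            current[i] = x[i]       # "reveal" feature i
            new = model(current)
            phi[i] += new - prev    # marginal contribution of feature i
            prev = new
    return [p / len(orderings) for p in phi]

# Toy scoring model (invented): weighted sum plus an interaction term.
def model(z):
    credit, employment, debt = z
    return 0.5 * credit + 0.3 * employment - 0.4 * debt + 0.2 * credit * employment

x = [0.9, 0.8, 0.3]         # this applicant's features
baseline = [0.5, 0.5, 0.5]  # an "average" applicant

phi = shapley_values(model, x, baseline)
# Additivity: baseline prediction + sum of Shapley values = prediction for x.
print(phi, model(baseline) + sum(phi), model(x))
```

This brute-force version visits all n! orderings, so it is only practical for a handful of features; the SHAP library uses model-specific approximations to scale.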
LIME
Local Interpretable Model-agnostic Explanations — a technique that explains individual predictions by approximating the complex model locally with a simple, interpretable model.
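The core LIME idea can be sketched in one dimension: sample points around the instance, weight them by proximity, and fit a weighted linear surrogate whose slope is the local explanation. The black-box function, kernel width, and sampling scheme below are illustrative choices, not the `lime` library's defaults:

```python
import math
import random

# A black-box model we want to explain locally (invented for illustration).
def black_box(x):
    return math.sin(x) + 0.1 * x * x

def lime_1d(f, x0, num_samples=500, width=2.0, kernel_width=0.75):
    """Fit a weighted linear surrogate g(x) = intercept + slope*x near x0,
    weighting samples by an exponential proximity kernel (LIME-style)."""
    random.seed(0)
    xs = [x0 + random.uniform(-width, width) for _ in range(num_samples)]
    ys = [f(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]

    # Weighted least squares, closed form for a single feature.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    slope = cov / var
    intercept = my - slope * mx
    return intercept, slope

x0 = 1.0
intercept, slope = lime_1d(black_box, x0)
# The slope is the local explanation: how the prediction responds near x0.
print(f"local surrogate near x0={x0}: y = {intercept:.3f} + {slope:.3f}*x")
```

The surrogate is faithful only near x0; farther away, the nonlinear black box and the linear fit diverge, which is exactly the "local" in LIME.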
Black Box
A model or system whose internal workings are not visible or understandable to the user — you can see the inputs and outputs but not the reasoning in between. Most deep learning models are considered black boxes.