LIME
Local Interpretable Model-agnostic Explanations — a technique that explains individual predictions by approximating the complex model locally with a simple, interpretable model.
Why It Matters
LIME works with any model type (model-agnostic) and provides intuitive explanations. It is widely used when you need to explain black-box model predictions.
Example
LIME can explain a text classifier's prediction by highlighting which words most influenced the 'positive sentiment' classification of a product review.
Think of it like...
Like zooming into a small area of a complex map and drawing a simple, straight-line approximation that explains the local terrain — it is not globally accurate but locally useful.
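The local-approximation idea above can be sketched in a few lines. This is a minimal, illustrative version of the LIME recipe (perturb, predict, weight by proximity, fit a simple surrogate), not the `lime` library's implementation: the toy `black_box` function, the Gaussian sampling scale, and the kernel width are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Hypothetical nonlinear model we want to explain locally.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, 1.0])  # the individual instance to explain

# 1. Sample perturbations around the instance.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)

# 2. Weight each sample by its proximity to x0 (Gaussian kernel).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)

# 3. Fit a weighted linear surrogate: solve (A^T W A) beta = A^T W y.
A = np.hstack([np.ones((len(Z), 1)), Z])  # intercept column + features
W = np.diag(w)
beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# beta[1:] are the local feature importances; for this toy model they
# approximate the black box's gradient at x0: (cos(0.5), 2.0).
print(beta[1:])
```

The surrogate's coefficients are only valid near x0, mirroring the map analogy: a straight-line fit that is locally useful but globally wrong.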
Related Terms
SHAP
SHapley Additive exPlanations — a method based on game theory that explains individual predictions by calculating each feature's contribution to the prediction. SHAP values are additive and consistent.
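The additivity property can be checked with an exact (brute-force) Shapley computation on a tiny model. This is an illustrative sketch, not the SHAP library's algorithm: the three-feature `model` and the zero baseline used for "absent" features are assumptions made for the example.

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical model: f(x) = 2*x0 + x1*x2
    return 2 * x[0] + x[1] * x[2]

x = [1.0, 2.0, 3.0]          # instance to explain
baseline = [0.0, 0.0, 0.0]   # "absent" features take baseline values
n = len(x)

def value(S):
    # Model output with features in S taken from x, the rest from baseline.
    z = [x[i] if i in S else baseline[i] for i in range(n)]
    return model(z)

# Shapley value: weighted average of each feature's marginal contribution
# over all subsets of the other features.
phi = [0.0] * n
for i in range(n):
    others = [j for j in range(n) if j != i]
    for k in range(n):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi[i] += weight * (value(set(S) | {i}) - value(set(S)))

# Additivity: the contributions sum to prediction minus baseline prediction.
print(phi, sum(phi), model(x) - model(baseline))
```

For this toy model the contributions come out to [2.0, 3.0, 3.0], and their sum (8.0) equals the gap between the prediction and the baseline prediction, which is exactly the additivity guarantee mentioned above. The brute force is exponential in the number of features; practical SHAP implementations use approximations or model-specific shortcuts.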
Explainable AI
The subfield focused on making AI decision-making processes understandable to humans. XAI techniques provide insights into why a model made a specific prediction.
Interpretability
The degree to which a human can understand the internal mechanisms and reasoning process of a machine learning model. More interpretable models allow deeper inspection of how they work.