Machine Learning

LIME

Local Interpretable Model-agnostic Explanations — a technique that explains individual predictions by approximating the complex model locally with a simple, interpretable model.

Why It Matters

LIME works with any model type (model-agnostic) and provides intuitive explanations. It is widely used when you need to explain black-box model predictions.

Example

LIME can explain a text classifier's prediction by highlighting which words most influenced the 'positive sentiment' classification of a product review.
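A minimal sketch of that idea, assuming a toy keyword-counting classifier standing in for the black box (the real `lime` library is more sophisticated, but the steps are the same): perturb the input by masking words, query the model on each perturbation, weight samples by proximity to the original, and fit a weighted linear model whose coefficients rank each word's influence.

```python
import numpy as np

# Hypothetical black-box sentiment classifier: scores a review by
# counting sentiment-bearing words (stands in for any opaque model).
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"terrible", "broken"}

def black_box_proba(text):
    """Return P(positive sentiment) for a piece of text."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return 1.0 / (1.0 + np.exp(-score))  # squash score to a probability

def lime_text(text, predict, n_samples=1000, seed=0):
    """Core LIME recipe for text: perturb by dropping words, then fit a
    proximity-weighted linear model; its coefficients are the explanation."""
    rng = np.random.default_rng(seed)
    words = text.split()
    d = len(words)
    # Binary masks: 1 = keep word, 0 = drop it. Row 0 = the original text.
    Z = rng.integers(0, 2, size=(n_samples, d))
    Z[0] = 1
    # Query the black box on every perturbed sentence.
    y = np.array([predict(" ".join(w for w, keep in zip(words, z) if keep))
                  for z in Z])
    # Proximity kernel: perturbations closer to the original weigh more.
    dist = 1.0 - Z.mean(axis=1)              # fraction of words removed
    w = np.exp(-(dist ** 2) / 0.25)
    # Weighted least squares with an intercept column.
    X = np.hstack([Z, np.ones((n_samples, 1))])
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ y, rcond=None)
    return dict(zip(words, coef[:-1]))       # word -> local importance

explanation = lime_text("great phone but terrible battery", black_box_proba)
top = max(explanation, key=explanation.get)  # word pushing most toward 'positive'
```

With this toy model, 'great' gets a clearly positive coefficient and 'terrible' a clearly negative one, while neutral words like 'phone' sit near zero, which is exactly the word-highlighting behavior described above.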

Think of it like...

Like zooming into a small area of a complex map and drawing a simple, straight-line approximation that explains the local terrain — it is not globally accurate but locally useful.

Related Terms