LIME (Local Interpretable Model-agnostic Explanations) is an influential open-source library for explaining individual predictions of any machine learning model in an interpretable and faithful manner. Developed by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin at the University of Washington and introduced in the 2016 paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier, LIME pioneered the concept of local, model-agnostic explanations and has become one of the most widely referenced techniques in the explainable AI literature.

LIME's core approach generates perturbations of an input instance, observes how the model's prediction changes, and fits a simple, interpretable model (typically a sparse linear model) to these perturbations, weighted by their proximity to the instance being explained. This local surrogate approximates the complex model's behavior in the vicinity of the specific prediction, yielding an explanation that is both interpretable and faithful to the original model's local decision boundary.

The library supports explanations for multiple data modalities, including tabular data, text, and images. For text classification, LIME highlights which words most influenced the prediction; for image classification, it identifies which regions of the image were most important; and for tabular data, it shows which features drove the prediction and in which direction. This versatility makes LIME applicable across a wide range of ML applications.

LIME's model-agnostic nature is one of its greatest strengths. Because it treats the model as a black box and only requires the ability to query it for predictions, LIME can explain any classifier or regressor regardless of its internal architecture. The same explanation technique can therefore be applied to logistic regression, random forests, neural networks, and even proprietary API-based models.

The library is implemented in Python and integrates with standard ML tools. It is freely available under the BSD 2-Clause license and has accumulated over 10,000 GitHub stars. LIME has been widely adopted in both academic research and industry practice, and is frequently used alongside SHAP to provide complementary perspectives on model behavior. However, LIME explanations can be unstable: small changes to the input or to the perturbation sampling can produce different explanations, which is an important consideration for critical applications.
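To make the tabular workflow concrete, here is a minimal sketch of explaining one prediction of a black-box classifier. The dataset, model, and parameter values are illustrative assumptions, not recommendations; it assumes scikit-learn and the lime package are installed.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train any black-box classifier; LIME only needs a function that returns
# class probabilities (here, predict_proba).
data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The explainer uses the training data to learn feature statistics
# from which perturbed samples are drawn.
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single instance: LIME perturbs it, queries the model on the
# perturbations, and fits a sparse linear surrogate weighted toward the
# local neighborhood of the instance.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)

# Feature/weight pairs from the local linear surrogate.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Because the perturbations are sampled randomly, repeated runs can produce somewhat different weights, which is the instability caveat noted above; fixing the explainer's random seed makes a given explanation reproducible.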
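The text workflow is similar: any callable that maps raw strings to class probabilities can be explained. The sketch below, with an assumed scikit-learn pipeline and illustrative categories, shows how word-level weights are obtained.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

categories = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=categories)

# The pipeline exposes predict_proba on raw strings, which is all LIME needs.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(train.data, train.target)

explainer = LimeTextExplainer(class_names=categories)
exp = explainer.explain_instance(train.data[0], pipeline.predict_proba, num_features=6)

# Words with their positive or negative local weights for the predicted class.
print(exp.as_list())
```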