How to print LIME output in a Python web application using Streamlit?

LIME is a Python library used to generate explainable/interpretable output on top of model predictions. I am trying to create a data tool that explains predictions using the LIME algorithm, but I am stuck at showing the LIME output in a Streamlit app. Please let me know how to print this output.

Code:

The model is built and a predict function is created, and then:

exp = explainer.explain_instance(x_test[idx], mlp_cls.predict_proba, num_features=10)
print('True class: %s' % x_test[idx])
exp.show_in_notebook(text=True)
This works well in a Jupyter notebook. In Streamlit I am trying something like this: st.write(exp.show_in_notebook(text=True)), but it does not show the output.

Hi @santosh_boina,

I haven't used LIME (yet), but it looks to me like you're going to need to select a different output method for the explainer object from one of the ones listed on the LIME documentation page.

It looks like doing something like st.markdown(exp.as_html(), unsafe_allow_html=True) could work.

Let us know! Thanks for experimenting with Streamlit!
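In case it helps, here is a minimal, self-contained sketch of that idea. Everything in it (the iris data, the RandomForestClassifier, the variable names) is a stand-in I chose for illustration rather than the original mlp_cls setup; it builds a tabular explainer and then renders the explanation both as a matplotlib figure via st.pyplot and as raw HTML via st.markdown:

import streamlit as st
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Toy stand-ins for the original mlp_cls / x_test objects
iris = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    discretize_continuous=True,
)

idx = 25
exp = explainer.explain_instance(iris.data[idx], clf.predict_proba, num_features=4)

st.write(f"True class: {iris.target_names[iris.target[idx]]}")

# Matplotlib rendering of the feature weights
st.pyplot(exp.as_pyplot_figure())

# Raw-HTML rendering of the full LIME explanation
st.markdown(exp.as_html(), unsafe_allow_html=True)

Running streamlit run on a file like this should at least show whether the HTML route renders in your browser.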

Hi @santosh_boina and @nthmost,

I have also raised a question about LIME TextExplainer.

The suggested exp.as_html() did not work, so @arraydude has kindly raised a new GitHub issue.

This is the original code for the app

import streamlit as st
import sklearn
import sklearn.ensemble
import sklearn.feature_extraction.text
import lime
import numpy as np
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer
from lime import lime_text
from matplotlib import pyplot as plt

from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian']
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories)
newsgroups_test = fetch_20newsgroups(subset='test', categories=categories)
class_names = ['atheism', 'christian']

vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(lowercase=False)
train_vectors = vectorizer.fit_transform(newsgroups_train.data)
test_vectors = vectorizer.transform(newsgroups_test.data)

rf = sklearn.ensemble.RandomForestClassifier(n_estimators=500)
rf.fit(train_vectors, newsgroups_train.target)

c = make_pipeline(vectorizer, rf)
explainer = LimeTextExplainer(class_names=class_names)

idx = 83
exp = explainer.explain_instance(newsgroups_test.data[idx], c.predict_proba, num_features=6)

def main():
    st.title("Newsgroup Classifier")
    st.write(f"Document id = {idx}")
    st.write(f"Probability(christian) = {c.predict_proba([newsgroups_test.data[idx]])[0,1]}")
    st.write(f"True class: {class_names[newsgroups_test.target[idx]]}")
    fig = exp.as_pyplot_figure()
    st.pyplot(fig)
    plt.clf()
    st.markdown(exp.as_html(), unsafe_allow_html=True)

if __name__ == '__main__':
    main()
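If exp.as_html() via st.markdown still comes out garbled, one possible workaround (untested here, and assuming a Streamlit version that ships the components API, 0.63 or later) is to embed the HTML as a component instead of passing it to st.markdown, so LIME's bundled JavaScript and CSS run inside their own iframe:

import streamlit.components.v1 as components

# Embed the full LIME explanation; height and scrolling are just reasonable guesses
components.html(exp.as_html(), height=800, scrolling=True)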


Thanks a lot @iEvidently. Meanwhile, I am looking into other interpretable ML libraries and will try running them in Streamlit. I was exploring alternatives to LIME/SHAP, such as anchor, FairML, and interpret, to learn more about their text explainers and visualizations. Hope it is helpful for you as well.
