I want to create a web app for a deep learning model. It seems the model re-initializes every time I make a prediction, which takes much more time. How can I initialize the model once?
Hi @Givan
That’s a great question! The short answer: caching.
I would suggest decorating the function that loads your model with @st.experimental_singleton. This will ensure that your model is cached after the first call, and will prevent your app from reloading the same model with every widget interaction.
We’re developing new cache primitives that are easier to use and much faster than @st.cache:
- Use @st.experimental_singleton to cache functions that return non-data objects like TensorFlow/Torch/Keras sessions/models and/or database connections.
- Use st.experimental_memo to cache functions that return data, like dataframe computations (pandas, NumPy, etc.), downloaded data, and so on (see the sketch after the docs link below).
Read more here: Experimental cache primitives - Streamlit Docs
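To illustrate the second bullet, here's a minimal sketch of st.experimental_memo caching a dataframe computation. The URL and the load_data function are placeholders, not something from your app:
import pandas as pd
import streamlit as st

# Decorator to cache data-returning functions
@st.experimental_memo
def load_data(url):
    # The download and parse run only once per unique URL;
    # later calls with the same URL return the cached dataframe
    return pd.read_csv(url)

# Placeholder URL, replace with your own data source
df = load_data("https://example.com/data.csv")
st.dataframe(df)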
Here’s pseudocode to cache your ML model:
import streamlit as st
import favmodellibrary
# Decorator to cache non-data objects
@st.experimental_singleton
def load_model():
# Load large model
model = favmodellibrary.create_model()
return model
# Model is now cached
model = load_model()
# On subsequent interactions, even when the input changes,
# the cached model is reused instead of being reloaded
text = st.text_input("Enter some text", value="Classify this")
if text:
output = model.classify(text)
st.write("Prediction: ", output)
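And here's a more concrete sketch of the same pattern, assuming a Hugging Face transformers sentiment pipeline. Your actual model library and task will likely differ, so treat this as an illustration only:
import streamlit as st
from transformers import pipeline  # assumed library; swap in your own

# Cached so the pipeline is constructed only once per session
@st.experimental_singleton
def load_model():
    # Model download/construction happens only on the first call
    return pipeline("sentiment-analysis")

model = load_model()

text = st.text_input("Enter some text", value="Classify this")
if text:
    output = model(text)
    st.write("Prediction: ", output)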
Does this help?
Happy Streamlit-ing!
Snehan
Great! That helps me a lot.
I love Streamlit.