I have a Streamlit app running on Streamlit Cloud that classifies submitted images using one of two possible models. The relevant code looks like this:
import streamlit as st
from keras.models import load_model

model_option = st.selectbox(
    'Select fine-tuned model',
    ('ResNet50', 'VGG16'))

if model_option == 'ResNet50':
    predictor_model = load_model('ResNet50.model')
elif model_option == 'VGG16':
    predictor_model = load_model('VGG16.model')
The two model files (ResNet50.model and VGG16.model) are in my GitHub repository and are fairly large (517 MB and 182 MB, respectively).
My problem is that GitHub has bandwidth usage limits, which I will quickly exceed if I run the app repeatedly, since it loads a model on every run.
Is there a way to use st.cache or one of the new caching decorators (st.experimental_memo or st.experimental_singleton) to avoid this problem?
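For reference, here is the direction I am considering: a minimal sketch assuming st.experimental_singleton can simply wrap the Keras load (get_model is just an illustrative helper name, and I have not verified this on Streamlit Cloud):

import streamlit as st
from keras.models import load_model

@st.experimental_singleton
def get_model(name):
    # Assumption: load the model from disk once per name;
    # later reruns should reuse the cached object instead of reloading
    return load_model(f'{name}.model')

model_option = st.selectbox(
    'Select fine-tuned model',
    ('ResNet50', 'VGG16'))
predictor_model = get_model(model_option)

If I understand the docs correctly, the singleton cache is shared across sessions and reruns, so after the first load each model would stay in memory, though with files this size keeping both models resident at once could itself be a concern.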