Hi @Taiwo_Osunrinde
I suspect the issue is with Git LFS. When I cloned your repo and checked the model file size, it showed up as only 4 KB instead of 176 MB.
This might happen when a user has maxed out their Git LFS bandwidth or storage limits. Would you mind following these instructions from GitHub to view your Git LFS storage and bandwidth usage? Does it indicate that you have exceeded the default 1 GB of storage and/or bandwidth?
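If you want to double-check locally, note that a Git LFS pointer stub is just a tiny text file that starts with the LFS spec line. Here is a minimal sketch to tell the stub apart from the real model (run from the root of your cloned repo):

```python
# Check whether the cloned .h5 file is the real model or just an LFS pointer stub
with open('Breccia_Rock_Classifier.h5', 'rb') as f:
    first_line = f.readline()

if first_line.startswith(b'version https://git-lfs.github.com/spec/v1'):
    print('This is an LFS pointer stub, not the actual model.')
else:
    print('This looks like the real model file.')
```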
## Solution
Take a look at my fork of your app.
Since Streamlit Cloud also clones your repo, the model file in your app's container also shows up as only 4 KB. You can verify this on Streamlit Cloud with `import subprocess; print(subprocess.run(['ls', '-la'], capture_output=True, text=True).stdout)`.
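If you'd rather see the directory listing rendered in the app itself, here is a small sketch (nothing in it is specific to your repo; it simply lists the container's working directory):

```python
import subprocess

import streamlit as st

# List the files in the app container; the cloned model file will show up
# as only a few bytes because it is an LFS pointer, not the real model.
result = subprocess.run(['ls', '-la'], capture_output=True, text=True)
st.text(result.stdout)
```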
A workaround is to download the model by making an HTTP request to the raw GitHub URL of the `.h5` file, load the downloaded model into TensorFlow, and cache the model for the lifetime of the app with `@st.experimental_singleton`:
Create a new function to load your model: it downloads the `.h5` file, loads it into TF, and caches the model:
```python
import os
import urllib.request

import streamlit as st
import tensorflow

@st.experimental_singleton
def load_model():
    # Download the model once; later runs reuse the cached local copy
    if not os.path.isfile('model.h5'):
        urllib.request.urlretrieve('https://github.com/osunrinde/NGM-APP/raw/main/Breccia_Rock_Classifier.h5', 'model.h5')
    return tensorflow.keras.models.load_model('model.h5')
```
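As an optional sanity check, you can confirm the download succeeded by checking the file size on disk (the output lands in your app's logs when run on Streamlit Cloud):

```python
import os

# The real model should be ~176 MB; an LFS pointer stub is only a few bytes
print(f"model.h5 size: {os.path.getsize('model.h5'):,} bytes")
```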
Modify your prediction function so that it no longer loads the model itself, but instead accepts the model as an argument and returns the predictions:
```python
def Breccia_Predictions(model):
    image_ = pre_process()
    prediction_steps_per_epoch = np.math.ceil(image_.n / image_.batch_size)
    image_.reset()
    Breccia_predictions = model.predict_generator(image_, steps=prediction_steps_per_epoch, verbose=1)
    # model.close() # Uncommenting throws an error. You can't close a Sequential model...
    predicted_classes = np.argmax(Breccia_predictions, axis=1)
    return predicted_classes
```
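A side note, not required for the fix: `Model.predict_generator` is deprecated in TensorFlow 2.x, where `Model.predict` accepts generators directly. If you're on a recent TF version, the equivalent call would be:

```python
# TF 2.x equivalent of the deprecated predict_generator call
Breccia_predictions = model.predict(image_, steps=prediction_steps_per_epoch, verbose=1)
```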
And lastly, slightly modify your `Predict` `if` block to first call `load_model()` and pass the model to `Breccia_Predictions(model)`:
```python
if st.button('Predict'):
    model = load_model()
    predicted = Breccia_Predictions(model)
    # Rest of your code below ...
```
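One note on the caching: `@st.experimental_singleton` keeps the model in memory across reruns and sessions, so the download and load happen only once per container. If you ever replace the model file on GitHub, you'd need to clear both the cache and the local file; here's a minimal sketch (the "Refresh model" button is hypothetical, and `st.experimental_singleton.clear()` is available in recent Streamlit versions):

```python
import os

import streamlit as st

# Hypothetical button: clear the singleton cache and delete the local file
# so the next load_model() call re-downloads the model from GitHub.
if st.button('Refresh model'):
    st.experimental_singleton.clear()
    if os.path.isfile('model.h5'):
        os.remove('model.h5')
```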
Once you make the above changes, your app should load the TensorFlow model without errors!