Unable to deploy Streamlit app. Cloning failed

I am trying to deploy my Streamlit app to the public cloud, but I get the following cloning error -

Here is the GitHub repo associated with the Streamlit app I am trying to deploy - GitHub - anurag-chowdhury1975/bestsecret

I have some large model files (*.h5) that I have uploaded to GitHub using LFS. My Streamlit app loads these models from my GitHub repo. I suspect that may have something to do with why the cloning is failing.

Could someone please help me fix this issue? This is my public Streamlit app URL - https://img-classifier.streamlit.app

Thank you.

Thank you, @anurag_chowdhury, for your question.

I recommend checking out the insightful post by @dataprofessor on this subject. It provides detailed steps for managing Git LFS effectively, which should help troubleshoot your issue.

You can find it here: FAQ: How to use large files in your Streamlit app

Best wishes,

Thanks, @Charly_Wargnier. I already followed the instructions you linked to, but I am still not able to get my app to deploy for some reason. I will keep digging to see if anyone else has been able to solve this problem. Thanks

Thanks, @anurag_chowdhury.

One note – Git LFS imposes a quota on data storage and bandwidth. Exceeding this quota can inhibit further LFS downloads. Did you verify your account’s quota status?


You are right. It looks like I have exceeded my bandwidth quota on GitHub :frowning_face:.

I am either going to have to wait 25 days until my quota is reset, or I am going to have to pay for extra bandwidth. I was hoping I could switch to another Git provider such as Bitbucket, but it looks like Streamlit does not support any provider other than GitHub :sob:

Sorry to hear that, but you could also store your large files somewhere else.

Do you have specific requirements regarding file storage?


I have stored my model files in Google Drive, but I do not know how to load a model in my Streamlit app from Google Drive. Would you have instructions for that? I can try and search for how to do it. Thanks

Sure – Here’s how you can load your model from Google Drive into your app:

  1. Publicly share your model file on Google Drive and get the shareable link.
  2. Convert the shareable link to a direct download link by changing its format to https://drive.google.com/uc?id=FILE_ID.
  3. Use the following Python code in your app to download and load the model:
import requests
from io import BytesIO
from your_model_library import load_model # Update this import as needed

# Replace 'YOUR_FILE_ID_HERE' with your actual file ID
url = 'https://drive.google.com/uc?id=YOUR_FILE_ID_HERE'

def load_model_from_gdrive(url):
    response = requests.get(url)
    model_file = BytesIO(response.content)
    model = load_model(model_file)
    return model

model = load_model_from_gdrive(url)
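Incidentally, step 2 can also be done programmatically. Here is a small sketch (the helper name is my own) that pulls the file ID out of a standard Drive share link and builds the direct-download URL:

```python
import re

def to_direct_download(share_link):
    """Turn a Drive share link (.../file/d/FILE_ID/view...) into a
    direct-download URL of the form https://drive.google.com/uc?id=FILE_ID."""
    match = re.search(r"/d/([\w-]+)", share_link)
    if not match:
        raise ValueError("No Google Drive file ID found in the link")
    return "https://drive.google.com/uc?id=" + match.group(1)

print(to_direct_download("https://drive.google.com/file/d/abc123/view?usp=sharing"))
# → https://drive.google.com/uc?id=abc123
```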

Let me know if that suits your needs :slight_smile:


Thank you for the quick response. I will try it and let you know if it works. I appreciate all your help.


You’re welcome. Let me know how it goes! :slight_smile:

I followed the instructions you provided to download my .h5 models from my Google Drive (I have made these .h5 model files viewable to everyone); however, I am getting the following error. I can't seem to find a suitable solution to this error online.
Here is the link to my GitHub source file - bestsecret/src/bestsecret_app_new.py at main · anurag-chowdhury1975/bestsecret · GitHub

OSError: Unable to load model. Filepath is not an hdf5 file (or h5py is not available) or SavedModel. Received: filepath=<_io.BytesIO object at 0x2808cbe70>
File "/Applications/anaconda3/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "/Users/chaitisen/Desktop/propulsion/bestsecret/src/bestsecret_app_new.py", line 64, in <module>
models = load_models()
File "/Users/chaitisen/Desktop/propulsion/bestsecret/src/bestsecret_app_new.py", line 42, in load_models
model_bag = load_model_from_gdrive('https://drive.google.com/uc?id=1VSitaSvcEuzNIPI_Mb1lIk9N4hBYVBWb')
File "/Applications/anaconda3/lib/python3.11/site-packages/streamlit/runtime/caching/cache_utils.py", line 212, in wrapper
return cached_func(*args, **kwargs)
File "/Applications/anaconda3/lib/python3.11/site-packages/streamlit/runtime/caching/cache_utils.py", line 241, in __call__
return self._get_or_create_cached_value(args, kwargs)
File "/Applications/anaconda3/lib/python3.11/site-packages/streamlit/runtime/caching/cache_utils.py", line 268, in _get_or_create_cached_value
return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
File "/Applications/anaconda3/lib/python3.11/site-packages/streamlit/runtime/caching/cache_utils.py", line 324, in _handle_cache_miss
computed_value = self._info.func(*func_args, **func_kwargs)
File "/Users/chaitisen/Desktop/propulsion/bestsecret/src/bestsecret_app_new.py", line 27, in load_model_from_gdrive
model = load_model(model_file)
File "/Applications/anaconda3/lib/python3.11/site-packages/keras/src/saving/saving_api.py", line 262, in load_model
return legacy_sm_saving_lib.load_model(
File "/Applications/anaconda3/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/Applications/anaconda3/lib/python3.11/site-packages/keras/src/saving/legacy/save.py", line 259, in load_model
raise IOError(

You could start by making sure the .h5 file on Google Drive isn't corrupted. Maybe download it back to your computer and run the same code against the local copy to see if the file itself is okay.

Also, it might be a good idea to try loading the model directly from your local drive using the load_model function. This way, you can figure out whether the problem is really about streaming the file from Google Drive or something else with the file.
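One more thought, in case it helps: Keras's legacy HDF5 loader generally expects a filesystem path (or an h5py file) rather than a raw BytesIO object, which would match the error message you posted. A sketch of a workaround (the helper name is mine, untested against your models): write the downloaded bytes to a temporary file and load the model from that path instead.

```python
import tempfile

def save_bytes_to_tempfile(data, suffix=".h5"):
    """Write raw bytes to a named temporary file and return its path."""
    with tempfile.NamedTemporaryFile(suffix=suffix, delete=False) as tmp:
        tmp.write(data)
        return tmp.name

# Usage sketch (assumes `requests` is installed and `url` points at the .h5):
# response = requests.get(url)
# model_path = save_bytes_to_tempfile(response.content)
# model = load_model(model_path)
```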

Let me know.


I was loading my models from my local drive initially and it worked fine. I even re-downloaded the models from Google Drive just to make sure they weren't corrupted when I uploaded them the first time, and I was able to load the models perfectly from my local drive. This confirms that the models in Google Drive are not corrupted.
There seems to be something else going on here... I can't figure out what it is.

In this case, I'm not too sure, but do you absolutely need to keep them on Google Drive?

I’m listing 7 alternatives for you below. :blush:

  1. Amazon S3: https://aws.amazon.com/s3/
  2. Microsoft Azure Blob Storage: https://azure.microsoft.com/en-us/services/storage/blobs/
  3. IBM Cloud Object Storage: https://www.ibm.com/cloud/object-storage
  4. Backblaze B2 Cloud Storage: https://www.backblaze.com/b2/cloud-storage.html
  5. DigitalOcean Spaces: https://www.digitalocean.com/products/spaces/
  6. Wasabi Hot Cloud Storage: https://wasabi.com/
  7. Oracle Cloud Infrastructure Object Storage: https://www.oracle.com/cloud/storage/object-storage.html

I’m also linking this guide we have on connecting Streamlit apps to various databases and APIs securely - you may find it useful.

I hope this helps,


Thanks. I will try these alternative storages for my models.

Sorry I forgot to respond to you earlier. I was successful in making it work by loading my .h5 model from an S3 bucket. I couldn't load it directly, as keras.models.load_model() expects a local path to the model, so I had to download the model to a local drive first and then load it via the following code -

import boto3
import streamlit as st
from tensorflow.keras.models import load_model

# These assume matching entries in your app's secrets.toml
S3_REGION = st.secrets["AWS_REGION"]
S3_KEY_ID = st.secrets["AWS_ACCESS_KEY_ID"]
S3_SECRET_KEY = st.secrets["AWS_SECRET_ACCESS_KEY"]

client_s3 = boto3.client("s3", region_name=S3_REGION, aws_access_key_id=S3_KEY_ID, aws_secret_access_key=S3_SECRET_KEY)

client_s3.download_file("<BUCKET_NAME>", "<MODEL_FILE_NAME>", "/tmp/<MODEL_FILE_NAME>")

model = load_model("/tmp/<MODEL_FILE_NAME>", custom_objects={'imagenet_utils': imagenet_utils})
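A small refinement on the snippet above (the names here are mine, not tested against a real bucket): skip the S3 download when the file is already in /tmp, so Streamlit script reruns don't re-fetch the model every time.

```python
import os

def ensure_local_copy(s3_client, bucket, key, dest):
    """Download s3://bucket/key to dest only if dest does not exist yet."""
    if not os.path.exists(dest):
        s3_client.download_file(bucket, key, dest)
    return dest

# Usage sketch:
# model_path = ensure_local_copy(client_s3, "<BUCKET_NAME>",
#                                "<MODEL_FILE_NAME>", "/tmp/<MODEL_FILE_NAME>")
# model = load_model(model_path)
```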


Thanks for the heads-up @anurag_chowdhury, and I’m glad to hear you’re moving forward with this.

Do you still need any support with anything?


No, I am good. Thank you for helping me solve my issue!
