Large file preventing Streamlit deployment

I am trying to deploy an image classification model. The Streamlit app runs well locally,
but when I deploy it to Streamlit Cloud I keep getting errors.
I tried to upload the model file to GitHub, but GitHub rejects the upload because of its size.

Can you advise what I need to do? This is my first time deploying a Streamlit app to the cloud.

See the logs below:

[08:25:41] 🐍 Python dependencies were installed from /app/maid_poc/requirements.txt using pip.
Check if streamlit is installed
Streamlit is already installed
[08:25:42] 📦 Processed dependencies!

2022-08-10 08:25:56.593542: W tensorflow/stream_executor/platform/default/] Could not load dynamic library ''; dlerror: cannot open shared object file: No such file or directory
2022-08-10 08:25:56.593587: I tensorflow/stream_executor/cuda/] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-08-10 08:25:59.332 Uncaught app exception
Traceback (most recent call last):
  File "/home/appuser/venv/lib/python3.9/site-packages/streamlit/scriptrunner/", line 557, in _run_script
    exec(code, module.__dict__)
  File "", line 37, in <module>
  File "/home/appuser/venv/lib/python3.9/site-packages/streamlit/legacy_caching/", line 573, in wrapped_func
    return get_or_create_cached_value()
  File "/home/appuser/venv/lib/python3.9/site-packages/streamlit/legacy_caching/", line 557, in get_or_create_cached_value
    return_value = func(*args, **kwargs)
  File "", line 33, in load_model
  File "/home/appuser/venv/lib/python3.9/site-packages/keras/utils/", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/appuser/venv/lib/python3.9/site-packages/keras/saving/", line 206, in load_model
    raise IOError(f'No file or directory found at {filepath_str}')
OSError: No file or directory found at mobilemodel3.hdf5

Streamlit Cloud supports Git LFS. You could upload your model to GitHub using Git LFS.
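A minimal sketch of that workflow, assuming the model file is named `mobilemodel3.hdf5` and you run it inside your repository (the guard makes it a no-op where git-lfs is unavailable):

```shell
# Sketch: track a large model file with Git LFS *before* committing it.
# Guarded so it does nothing if git-lfs is missing or we are not in a repo.
if command -v git-lfs >/dev/null 2>&1 && git rev-parse --git-dir >/dev/null 2>&1; then
  git lfs install                    # set up the LFS hooks
  git lfs track "mobilemodel3.hdf5"  # writes a tracking rule to .gitattributes
  git add .gitattributes mobilemodel3.hdf5
  git commit -m "Track model with Git LFS"
  git push origin master             # LFS uploads the file separately
fi
lfs_sketch_done=1
```

The order matters: the file must be tracked before it is ever committed, otherwise the raw blob lands in your Git history.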

Note: Streamlit Community Cloud offers 1 GB of RAM per app. If your model needs more memory,
you could host it elsewhere (AWS, GCP, etc.) and query it for inference (REST, gRPC) from your app on Streamlit Cloud.
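A sketch of that remote-inference pattern; the endpoint URL and JSON schema here are assumptions for illustration, not a real API:

```python
import base64
import json

# Hypothetical inference endpoint; replace with your own model server's URL.
INFER_URL = "https://your-model-host.example.com/predict"

def build_payload(image_bytes: bytes) -> str:
    """Client side: wrap raw image bytes in a JSON body for the model server."""
    return json.dumps({"image_b64": base64.b64encode(image_bytes).decode("ascii")})

def decode_payload(body: str) -> bytes:
    """Server side: recover the original image bytes from that JSON body."""
    return base64.b64decode(json.loads(body)["image_b64"])
```

In the Streamlit app you would then send `build_payload(uploaded_file.getvalue())` with `requests.post(INFER_URL, ...)` and render the returned prediction, so the ~500 MB model never has to live in the repository at all.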


I tried Git LFS but it still gives me an error.

(Yeongnam) C:\Users\yeong\Documents\GitHub\maid_repo>git lfs install
Updated Git hooks.
Git LFS initialized.

(Yeongnam) C:\Users\yeong\Documents\GitHub\maid_repo>git lfs track “mobilemodel3.hdf5”
Tracking “mobilemodel3.hdf5”

(Yeongnam) C:\Users\yeong\Documents\GitHub\maid_repo>git add .gitattributes

(Yeongnam) C:\Users\yeong\Documents\GitHub\maid_repo>git add mobilemodel3.hdf5

(Yeongnam) C:\Users\yeong\Documents\GitHub\maid_repo>git commit -m “add model”
[master 6ae232d] add model
1 file changed, 1 insertion(+)

(Yeongnam) C:\Users\yeong\Documents\GitHub\maid_repo>git push origin main
error: src refspec main does not match any
error: failed to push some refs to ‘GitHub - yeongnamtan/maid_poc: Deployment of MAID PoC

(Yeongnam) C:\Users\yeong\Documents\GitHub\maid_repo>git push origin master
Uploading LFS objects: 100% (1/1), 523 MB | 0 B/s, done.
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Delta compression using up to 8 threads
Compressing objects: 100% (17/17), done.
Writing objects: 100% (18/18), 274.37 MiB | 5.44 MiB/s, done.
Total 18 (delta 4), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (4/4), done.
remote: error: Trace: ed05258158e5b7659f04315b104073f469b895336ba19597b299972fe828a8f1
remote: error: See Managing large files - GitHub Docs for more information.
remote: error: File mobilemodel3.hdf5 is 498.95 MB; this exceeds GitHub’s file size limit of 100.00 MB
remote: error: GH001: Large files detected. You may want to try Git Large File Storage -
To GitHub - yeongnamtan/maid_poc: Deployment of MAID PoC
! [remote rejected] master → master (pre-receive hook declined)
error: failed to push some refs to ‘GitHub - yeongnamtan/maid_poc: Deployment of MAID PoC

I would consult the Git LFS docs on how to use the tool and fix common errors.
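Two things in your transcript are worth checking. First, the quotes in `git lfs track “mobilemodel3.hdf5”` appear as smart quotes (possibly just forum formatting); if literal curly quotes were typed, the tracked pattern would never match the real file. After a successful `git lfs track "mobilemodel3.hdf5"` with straight quotes, `.gitattributes` should contain a line like:

```
mobilemodel3.hdf5 filter=lfs diff=lfs merge=lfs -text
```

Second, your push log ("Writing objects: ... 274.37 MiB") suggests an earlier commit still contains the raw file, and GitHub rejects the whole push if any pushed commit exceeds the 100 MB limit, even when the tip commit uses an LFS pointer. `git lfs migrate import --include="mobilemodel3.hdf5"` rewrites those earlier commits to use LFS; since it rewrites history, the subsequent push needs `--force`.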