Exception: Error detected: SavedModel file does not exist

Hello,

I’m building a BERT model that predicts whether a sentence or review is positive or negative, served through a Streamlit web app.

It works perfectly on my local server, but when I try to deploy it with Streamlit Sharing, I get an error:
Exception: Error detected: SavedModel file does not exist at: mypredictor/{saved_model.pbtxt|saved_model.pb}

This happens even though the two required files are in the “mypredictor” folder on GitHub.

Keep in mind that one of the files is very large (1.3 GB), so I uploaded it using Git LFS.

Here is a screenshot of my main.py file:

GitHub repo link:

Thank you :slight_smile:

Hi @ARTiSticov,

Thank you for sharing your repo. The TensorFlow docs say that "there are two formats you can use to save an entire model to disk: the TensorFlow SavedModel format, and the older Keras H5 format."

Your script expects the TensorFlow SavedModel format and thus fails to find .pb files in mypredictor/ as the directory contains .h5 files instead.

I would recommend reading the TensorFlow/Keras docs to understand how to save and load models in the same format: Save and load Keras models | TensorFlow Core
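
To illustrate the difference, here is a minimal sketch of the two formats (the paths are placeholders, not your actual files):

# Minimal sketch: the same Keras model saved in both formats
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# TensorFlow SavedModel format: writes a directory containing saved_model.pb
model.save("my_savedmodel_dir")
restored = tf.keras.models.load_model("my_savedmodel_dir")

# Keras H5 format: writes a single .h5 file
model.save("my_model.h5")
restored = tf.keras.models.load_model("my_model.h5")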

Best, :balloon:
Snehan

What version of ktrain are you using for development? If loading the .h5 model works locally, I suspect the version you use locally supports it, while the one installed on Sharing from your requirements file does not support loading .h5 models.

Could you try pinning the version of ktrain in your requirements.txt file (replace x.x.x with the version you have installed locally):

# requirements.txt
ktrain==x.x.x
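
If you’re not sure which version you have installed locally, you can check from a Python shell (assuming ktrain exposes __version__, which recent releases do):

# Print the locally installed ktrain version
import ktrain
print(ktrain.__version__)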

Does that work?

Best, :balloon:
Snehan

Thank you for the response,

I want to let you know that when I run the app locally using streamlit run main.py, it works fine.
So I think the problem could be that Streamlit Sharing can’t detect large files uploaded to GitHub using Git LFS (Large File Storage).

Also, I used the same file format (.h5) to save a CNN model, and I uploaded that just fine. But that was only one file, whereas I think the BERT model needs two files to be saved.

Thanks again for trying to help.

Streamlit Sharing has supported Git LFS since February of this year. I will find out from the team whether there’s a maximum file size beyond which a file is not detected.

In the meantime, note that there are resource limits on sharing:

  • You can deploy up to 3 apps per account.
  • Apps get up to 1 CPU, 800 MB of RAM, and 800 MB of dedicated storage in a shared execution environment.
  • Apps do not have access to a GPU.

Large models such as BERT-base will not fit in memory. You may be able to load distilled or quantized versions of those models, depending on their size and support for inference on CPU.
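
For example, a distilled sentiment model can run within those limits. Here is a minimal sketch using the Hugging Face transformers library (the model name is a public example from the hub, not your fine-tuned BERT):

# Sketch: CPU inference with a distilled sentiment model via transformers
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("I loved this movie!"))  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]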

I tried this but it didn’t work.

I still think it could be that Streamlit Sharing can’t detect large files on GitHub.

@ARTiSticov Did your upload to Git LFS fail? I don’t see any new commits with smaller models :sweat_smile: The last model commit was 2 days ago.

I don’t think it failed. Should I re-upload it?

Also, I got this message from GitHub:

Do you think I have to add more data, or is the file uploaded fine? I can see the file size is 1.22 GB, which is the actual file size.

You might have uploaded the wrong files to GitHub. Usually, large files are included in your .gitignore so that they don’t get uploaded to your repo.
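
For reference, the usual setup is the opposite of what you want here: a .gitignore entry keeps the file out of the repo, while a .gitattributes entry (what git lfs track writes) makes Git LFS handle it. A sketch with a placeholder pattern:

# .gitattributes: written by `git lfs track "mypredictor/*.h5"`; matching files go through LFS
mypredictor/*.h5 filter=lfs diff=lfs merge=lfs -text

# .gitignore: if the same pattern appears here instead, the model never reaches the repo
mypredictor/*.h5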