I have developed and successfully deployed a Streamlit app here using a fine-tuned bert-base model.
However, I only intended this model to be a ‘placeholder’ whilst I experimented with others. The model I actually want to use is a fine-tuned bert-large! But when I tweak my code so that the Streamlit app points at the larger model instead, I get an error during deployment.
The service has encountered an error while checking the health of the Streamlit app: Get "http://localhost:8501/healthz": dial tcp 127.0.0.1:8501: connect: connection refused
I’m assuming that this must be due to the difference in model size, as this is the only thing that has changed. However, could someone confirm this, and maybe advise on what can be done to get Streamlit to load the larger model?
Thanks for posting!
It looks like your app's memory usage is around 1.5 GB, so it is likely that the size of the model is causing the issue. I'd recommend using caching as much as possible to avoid loading the model multiple times. Do you know the approximate size of the model?
Hi @Caroline, thanks for your message. That makes sense, as the size of the model alone is about 1.35 GB, which I guess already exceeds the standard allowance for Streamlit accounts?
I’m already caching the model in my code, but I guess with a model this size no amount of caching is going to help?
Is there any way to increase the memory allowance for my account? (I’m a university student working on a monkeypox misinformation classifier, if that helps in terms of it being an NFP / public interest project.)
Thank you very much again.
Shoot us an email at firstname.lastname@example.org and we’ll see what we can do. Thanks!
Thanks, @Caroline – I’ve sent an email throwing myself on your mercy.
This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.