Hi all - hoping someone can help me with this issue. My app worked fine when I ran it on localhost, but today when I tried to deploy it on Streamlit sharing I ran into an issue.
Has this app ever worked on Streamlit sharing? We had a brief outage yesterday, which caused some people’s apps to be in an uncertain state, which is solved by rebooting the app.
If it has never worked/didn’t work on the first deployment, it could be the case that the model you are using is exhausting the available resources. Do you have an idea of how large the model you are loading is?
Hi @randyzwitch - unfortunately it has never worked on Streamlit sharing (tried again just now).
I’m trying to use a pretrained, off-the-shelf word2vec model (word2vec-google-news-300), which is 1662.8 MB (it seems to download OK according to the logs?). Could that exhaust the available resources?
I ran into this error, even with a super slim 200 KB container. On digging through, it turns out that I need to either comment out headless under [server] in the config.toml file, or set it to true, for the app to deploy successfully. I figured I’d write this here in case anyone else is scratching their head as to why the app doesn’t deploy.
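For reference, this is roughly what the fix looks like in the config file (assuming the usual `.streamlit/config.toml` location in the repo):

```toml
# .streamlit/config.toml
[server]
# Either delete this line entirely or set it to true.
# headless = false prevented the app from deploying on Streamlit sharing.
headless = true
```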
@randyzwitch - are there any plans to increase the available RAM for deployed apps, or to add a commercial tier with more RAM in the future? I love Streamlit Cloud, but I am working with IFC 3D data, which is very resource hungry - even with all the code optimization and caching, I keep running into “Oh no… no memory”…