Unable to deploy the app due to the following error

Hello,

When I try to deploy the app from my GitHub repository, the app is not deployed and throws the error below.

Error: [manager] Error checking Streamlit healthz: Get "http://localhost:8501/healthz": dial tcp 127.0.0.1:8501: connect: connection refused

Note: The GitHub repository contains multiple Python files, and all required packages are listed in requirements.txt.
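The error means the platform's health check never got a response from the app on port 8501, i.e. the Streamlit server process never came up. The same probe can be reproduced locally with a short standard-library sketch (the URL and port are taken from the error message above):

```python
import urllib.request
import urllib.error

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the Streamlit health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # "connection refused" lands here: nothing is listening on the port,
        # usually because the app crashed during startup.
        return False

# The deployment manager polls this endpoint while the app boots.
print(is_healthy("http://localhost:8501/healthz"))
```

If this returns False when run next to a supposedly running app, the app process exited before it could bind the port, and the startup logs are the place to look.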

Hi @applicationdeveloper -

What platform are you trying to deploy from?

Best,
Randy

Hello randyzwitch,

I was trying to deploy the app (a Python file) from its GitHub location, but the folder contains multiple Python files.

But on which platform are you having the issue: Streamlit sharing, Heroku, AWS, something else?

Streamlit sharing.

Can you provide the Streamlit sharing URL, so that I can forward it on to our engineering team?

Hi! I'm currently trying to deploy a web app with PyTorch as a dependency. I'm getting the same error as above at this link: https://share.streamlit.io/xmpuspus/zeroshotclassification/main/app.py. Can you please help me out too? :smiley:

Hey @xmpuspus,

Welcome to the Streamlit community as well! :partying_face: :tada: :tada: :tada:

It seems as though you are still having this error; I cannot access your app from the link you supplied! Can you also link the GitHub repo you're deploying from, so we can send both links to our engineering team?

Happy Streamlit-ing!
Marisa

Hi,

Here you go! https://github.com/xmpuspus/ZeroShotClassification

Hey @xmpuspus!

So the engineering team looked into your problem. They saw that you are using pytorch==1.2.0; by the sounds of it, you need pytorch==1.5.0 or higher to get this running properly. Would you be able to upgrade your PyTorch package?
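In requirements.txt the pin would change roughly as follows (a sketch; note that the package on PyPI is named `torch`, not `pytorch`, so the exact line depends on what the repo currently uses):

```text
# requirements.txt — before
torch==1.2.0

# after: 1.5.0 or any newer release
torch>=1.5.0
```

Pinning with `>=` rather than `==` lets the platform resolve a compatible newer release on redeploy.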

Also, upgrading will fix this particular error. However, the team also looked into your app's memory usage, and it will likely fail even after this error is fixed because it is using too much memory. We currently give 800 MB per app. To get your app up and running for sure, are you able to cut down on any of this memory usage?
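One rough way to see how close an app is to the 800 MB limit is to log the process's peak resident memory. A minimal standard-library sketch (the limit value is from the post above; `resource` is Unix-only, which matches the deployment environment):

```python
import resource
import sys

def peak_rss_mb() -> float:
    """Peak resident set size of this process, in megabytes."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in kilobytes on Linux but bytes on macOS.
    if sys.platform == "darwin":
        peak /= 1024
    return peak / 1024

LIMIT_MB = 800  # current per-app limit on Streamlit sharing
print(f"peak memory: {peak_rss_mb():.1f} MB of {LIMIT_MB} MB")
```

Logging this right after the model loads shows whether the model alone already exceeds the quota.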

Hope this helps,
Happy Streamlit-ing!
Marisa

Hi Marisa,

Thanks for the help! I bumped the PyTorch version, but since I'm using Facebook's BART model, I'm unable to reduce the memory usage; I need that exact model for my project. Is there any way we can bump up the memory provision for just this one project? :slight_smile:

Again, appreciate your help!

Hi. I’m having the same issue. The sharing link is here: https://share.streamlit.io/aghasemi/ppngram/app.py

It seems to have started after my update yesterday.

Hi again. I see the problem has reappeared. Any progress on a permanent solution?

“A proper solution” is a tough formulation here, as it sounds like the model in combination with newer Torch is just bigger than the allocated resources.

As a free service, we're working towards making this available to as many people as we can, but in the near term, providing ever more resources isn't cost-effective. Hopefully this will change in the future, and hopefully ML libraries will stop continuously growing :frowning:

Thanks for your answer.

  • It used to work for a few days after the initial deployment. Have you recently modified the sharing quota per app?
  • What are the current resource quotas per app? Can I see them somewhere?

Many thanks
Best

Here is a link to the current resource limits for Streamlit sharing:

https://docs.streamlit.io/en/stable/deploy_streamlit_app.html#resource-limits

Resource limits
You can deploy up to 3 apps per account.

Apps get up to 1 CPU, 800 MB of RAM, and 800 MB of dedicated storage in a shared execution environment.

Apps do not have access to a GPU.

Launching your app from my account works, so it's not as simple as the libraries taking up too much space. Maybe try rebooting your app?

I did and it worked. Thanks. Let’s see if it breaks again.

Great to hear!

I’ve chatted with our engineering team; it looks like we used to auto-restart apps that crashed, but some apps got into a crash-reboot-crash cycle, so we turned it off. I’m confident we’ll shortly find a solution that makes it easier for users to understand what happened with their apps, and potentially restarts them under certain conditions.
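The crash-reboot-crash cycle described here is what exponential backoff between restart attempts is designed to prevent. A minimal sketch of the idea (all names and values are illustrative, not Streamlit's actual implementation):

```python
import random

def restart_delay(failures: int, base: float = 1.0, cap: float = 300.0) -> float:
    """Seconds to wait before the next restart attempt.

    Doubles with each consecutive failure, capped, with jitter so many
    crashing apps do not all restart at the same instant.
    """
    delay = min(cap, base * (2 ** failures))
    return delay * random.uniform(0.5, 1.0)

# A supervisor would sleep restart_delay(n) after the n-th consecutive
# crash, and reset n to 0 once the app stays healthy for a while.
for n in range(6):
    print(f"after crash {n}: wait up to {min(300.0, 1.0 * 2 ** n):.0f}s")
```

With a scheme like this, a healthy app that crashes once is restarted almost immediately, while a persistently broken app is retried only rarely instead of looping.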

Thanks for the information; it makes total sense. In the future, a warning e-mail saying your app has been crashing too often would be very helpful.

Thanks again