'Error checking Streamlit healthz' when running inference with a TensorFlow model

Hi all, I'm having a bit of trouble with my app. I didn't have any issues deploying it, but when it attempts to run an uploaded image through the TensorFlow model I get the following error:
[manager] Error checking Streamlit healthz: Get "http://localhost:8501/healthz": dial tcp 127.0.0.1:8501: connect: connection refused

The model is 396MB, so it should not exceed the 800MB limit. The app doesn't throw any errors when run on localhost.

Streamlit Share Link: https://share.streamlit.io/jam516/socket-detection/app.py
GitHub Link: https://github.com/Jam516/socket-detection

Any guidance would be greatly appreciated. Other than this, I really, really love how smooth and user-friendly the platform is. Streamlit Sharing is absolutely stunning.


Hi @Jam516, welcome to the Streamlit community!

I cloned your repo and deployed it as well, and it worked fine. Does the issue happen right after the first picture loads, or does it take a while to get into the error state?

A healthz error usually means that the container crashed, which happens fairly often with Torch/TensorFlow/Keras apps due to RAM usage. Keep in mind that the 396MB on disk understates peak memory: the TensorFlow runtime, the loaded weights, and the inference activations all count toward the limit. But it's hard to say without digging in further.
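One thing worth trying is caching the model load so the weights are read into memory once, rather than on every script rerun. A minimal sketch, assuming a Keras SavedModel (the "model" path and the function name are placeholders, adjust to your repo):

```python
import streamlit as st
import tensorflow as tf

# Load the model once and reuse it across reruns, instead of reading
# the ~400MB weights back into memory every time the script executes.
@st.cache(allow_output_mutation=True)  # on newer Streamlit: @st.cache_resource
def load_model():
    return tf.keras.models.load_model("model")  # placeholder path

model = load_model()
```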

Best,
Randy

Hi @randyzwitch, thanks for looking into this. The issue comes up after uploading the picture, when you click the 'Make a prediction' button. That's when the app tries to run the image through the TensorFlow model. What is the cap on RAM usage for Streamlit Sharing?

Currently, it is 800MB of RAM.
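If you want to see how close you're getting, one quick way to check (this assumes you add psutil to your requirements.txt; it isn't something Streamlit provides) is to print the process's resident memory from inside the app:

```python
import os

import psutil  # assumption: added to requirements.txt
import streamlit as st

# Show the app process's resident memory so you can watch how close
# inference pushes you toward the platform's RAM limit.
rss_mb = psutil.Process(os.getpid()).memory_info().rss / 1024**2
st.sidebar.write(f"Memory in use: {rss_mb:.0f} MB")
```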

Understood. I will try to lower my RAM usage and then make another attempt.
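For anyone following along, the approach I'm planning to try (a sketch; the paths and the quantization choice are my own assumptions, not an official recipe) is converting the Keras model to TFLite with default quantization, which should shrink both the file and the runtime footprint, then serving predictions through the TFLite interpreter:

```python
import numpy as np
import tensorflow as tf

# --- One-off conversion, run locally ----------------------------------
model = tf.keras.models.load_model("model")  # placeholder path
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
with open("model.tflite", "wb") as f:
    f.write(converter.convert())

# --- Lightweight inference in the deployed app ------------------------
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def predict(batch: np.ndarray) -> np.ndarray:
    # `batch` must match the model's expected input shape.
    interpreter.set_tensor(inp["index"], batch.astype(inp["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])
```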
