Streamlit cannot install CPU versions of torch

I am trying to run an app that has torch as a dependency.

The app loads correctly, but after it has been used once for inference, it crashes on its own and I can no longer use it.

I see this message:

[manager] Error checking Streamlit healthz: Get "http://localhost:8501/healthz": dial tcp connect: connection refused [manager] Streamlit server consistently failed status checks

This issue is already described here.

@randyzwitch mentioned there that, since torch is a very big dependency, this problem is expected, and no fix was suggested.

So I tried earlier versions of torch, both with CUDA (the default) and without CUDA (the +cpu builds). But with those dependencies in the requirements.txt file, Streamlit could not start the app even once. I saw errors along these lines:

ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.4.1, 0.4.1.post2, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.3.0, 1.3.1, 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.9.0)
ERROR: No matching distribution found for torch==1.9.0+cpu
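(For context, the +cpu builds are not published on PyPI; they are hosted on PyTorch's own package index, so pip would need a find-links entry in requirements.txt to resolve them. A sketch, with the torchvision line as an assumed companion pin:)

```text
# requirements.txt — sketch; the -f line points pip at PyTorch's own wheel index,
# without which pip only searches PyPI and sees none of the +cpu builds
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.9.0+cpu
torchvision==0.10.0+cpu
```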

Apparently this is a known issue with PyTorch, as described in this Issue.

How can this problem be solved? If this problem can’t be solved, what is the best way to go forward?

NB: Here is my app:

I’m not sure “expected” is necessarily the right framing. A lot of people come in using the same code they ran on another platform and expect that it will also run on Streamlit sharing, when, as in the case of your last post, we don’t have TPUs/GPUs available.

As for PyTorch not publishing a CPU-only version on PyPI, that’s an issue on their side. One thing I would try is installing PyTorch with conda (this would mean using an environment.yml file), as conda often has better build recipes for ML packages with lots of dependencies.
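A minimal environment.yml sketch for that approach (the app name is a placeholder; the `cpuonly` metapackage from the pytorch channel pins the CPU build) might look like:

```yaml
# environment.yml — sketch, assuming the pytorch conda channel's cpuonly metapackage
name: my-app          # placeholder name
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.8
  - pytorch
  - cpuonly           # forces the CPU-only build of pytorch
  - pip
  - pip:
      - streamlit
```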
