Initialisation of ML model from Huggingface leads to lost connection

My Streamlit app uses an ML model from Hugging Face (germansentiment).
When I call this model for the first time, the model files are downloaded from Hugging Face and stored to a local path (e.g. /root/.cache/ on a Linux system).

When I work on Windows or with Docker Desktop on my Windows system everything works fine.

But now I am trying to run my app in a Docker container at my company. Calling the model now breaks the connection between browser/client and server after about 15 seconds.

The proxy is set via HTTPS_PROXY, and the connection to the outside world works in general, as I also download some data from a webpage via an API.
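In case it helps anyone debugging the same thing: a quick way to confirm what the container actually sees is to print the standard proxy variables from inside the app, since requests/urllib3 (which Hugging Face's download code uses under the hood) honors them. This is just a generic diagnostic sketch, not specific to my setup:

```python
import os

def proxy_settings() -> dict:
    """Return the proxy-related environment variables the process sees.

    A half-configured proxy (e.g. HTTPS_PROXY set, but NO_PROXY missing
    for internal hosts) is a common reason downloads stall in a container.
    """
    return {var: os.environ.get(var)
            for var in ("HTTPS_PROXY", "HTTP_PROXY", "NO_PROXY")}

print(proxy_settings())
```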

Does anyone have an idea what else could cause this problem?

Hey, were you able to find a solution? I'm facing the same issue.

Well, I found a solution, but I guess it is more of a workaround.

I downloaded the model and stored it locally on the server. Since I am using a Docker container, I copy the model files into the image at the right path and load the model from there.
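For anyone landing here, a minimal sketch of that workaround (the local path is my own assumption, and I'm assuming the germansentiment package's underlying hub id — check yours): point `from_pretrained` at a directory baked into the image, falling back to the hub id only when the files are missing.

```python
from pathlib import Path

# Assumed paths/ids for illustration -- adjust to your image layout.
LOCAL_MODEL_DIR = "/app/models/german-sentiment-bert"
HUB_ID = "oliverguhr/german-sentiment-bert"

def resolve_model_source(local_dir: str = LOCAL_MODEL_DIR,
                         hub_id: str = HUB_ID) -> str:
    """Return the local model directory if it exists, else the hub id.

    Loading from a local directory avoids the runtime download entirely,
    which is what sidesteps the proxy/connection problem in the container.
    """
    return local_dir if Path(local_dir).is_dir() else hub_id

# Then load without touching the network when the files are present:
#   from transformers import AutoTokenizer, AutoModelForSequenceClassification
#   src = resolve_model_source()
#   tok = AutoTokenizer.from_pretrained(src, local_files_only=Path(src).is_dir())
#   mdl = AutoModelForSequenceClassification.from_pretrained(
#       src, local_files_only=Path(src).is_dir())
```

The `local_files_only=True` flag makes transformers fail fast instead of hanging on a blocked download, which also makes misconfiguration easier to spot.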