My Streamlit app uses an ML model from Hugging Face (germansentiment).
On the first call, the model files are downloaded from Hugging Face and stored in a local cache path (e.g. /root/.cache/ on Linux).
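For context, this is roughly what the first call looks like. The cache location can be redirected via the standard Hugging Face `HF_HOME` environment variable (the path below is just a placeholder, not my real setup):

```python
import os

# Sketch, not my exact app code: point the Hugging Face cache somewhere
# other than /root/.cache by setting HF_HOME *before* the model library
# is imported. The path is a placeholder (assumption).
os.environ["HF_HOME"] = "/app/hf_cache"

# The first model call would then download the files into HF_HOME, e.g.:
# from germansentiment import SentimentModel
# model = SentimentModel()
# model.predict_sentiment(["Das ist super"])
print(os.environ["HF_HOME"])
```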
On Windows, and with Docker Desktop on my Windows machine, everything works fine.
But now I am trying to run the app in a Docker container on my company's infrastructure. There, calling the model breaks the connection between the browser/client and the server after about 15 seconds.
The proxy is set via HTTPS_PROXY, and outbound connectivity works in general: the app also successfully downloads data from an external web API.
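In case it matters, this is roughly how I make sure the proxy variables are visible to the Python process doing the download. The proxy URL is a placeholder; `requests` and `huggingface_hub` pick the proxy up from these environment variables, and I set both spellings since lookup is case-sensitive on Linux:

```python
import os

# Hedged sketch: the proxy URL is a placeholder, not my real company proxy.
proxy = "http://proxy.example.com:8080"

# Set upper- and lower-case variants so any library that reads the
# environment (requests, urllib, huggingface_hub) finds the proxy.
for var in ("HTTPS_PROXY", "https_proxy", "HTTP_PROXY", "http_proxy"):
    os.environ[var] = proxy

print(os.environ["https_proxy"])
```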
Does anyone have an idea what else could cause this problem?