Streamlit HA configuration

Hi there!

I was wondering how to best deploy Streamlit in a high-availability configuration on Kubernetes.

My first thought was to simply use a horizontal pod autoscaler and a load balancer. This would surely work, but it is inefficient, as each app instance (pod) would hold a separate copy of the cache. In addition, it may also lead to each pod showing different results, since the cache can vary from pod to pod if the cache conditions are not set properly (granted, that is a separate issue, but I would like to limit this possibility where I can).
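
To illustrate the per-pod cache problem I mean: `st.cache_data` keeps its entries in the memory of the process it runs in, so with a toy app like this (hypothetical example, not my real app), each replica caches its own value and the one you see depends on which pod the load balancer routes you to:

```python
import datetime

import streamlit as st


@st.cache_data
def expensive_lookup() -> str:
    # Cached only in this process's memory -- every pod builds and keeps its own copy.
    return datetime.datetime.utcnow().isoformat()


st.write("Cached at:", expensive_lookup())
```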

Having something like memcached/Redis as a caching backend should mitigate this and ensure a single source of truth for the cache. In addition, it also reduces the startup time for new pods, since they no longer need to regenerate the cache themselves.
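
What I am picturing is roughly something like the sketch below. This is just an untested idea, assuming a Redis Service reachable at `redis:6379` inside the cluster; `expensive_computation` and the cache key are made-up placeholders for the real workload:

```python
import pickle
import time

import pandas as pd
import redis
import streamlit as st

REDIS_HOST = "redis"  # assumption: a Redis Service named "redis" in the same namespace
CACHE_TTL_SECONDS = 3600


@st.cache_resource
def get_redis() -> redis.Redis:
    # One connection per Streamlit process; the cached data itself lives in Redis.
    return redis.Redis(host=REDIS_HOST, port=6379)


def expensive_computation() -> pd.DataFrame:
    # Stand-in for the real work (loading/transforming data).
    time.sleep(5)
    return pd.DataFrame({"value": range(1000)})


def shared_cache_get(key: str) -> pd.DataFrame:
    r = get_redis()
    blob = r.get(key)
    if blob is not None:
        # Cache hit: every pod deserializes the same stored result.
        return pickle.loads(blob)
    # Cache miss: compute once, then publish to Redis for all pods.
    df = expensive_computation()
    r.set(key, pickle.dumps(df), ex=CACHE_TTL_SECONDS)
    return df


st.dataframe(shared_cache_get("my-dataset"))
```

With something like this, a freshly scaled-up pod would only pay the Redis round trip instead of recomputing everything, which is the startup-time win I am hoping for.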

Is there anyone who has attempted this or anything similar and can share some experience? How is this achieved in Streamlit Cloud?


Getting some solutions on this would be huge. I am currently running into the issue of optimizing Streamlit performance on GKE. It seems to be about twice as slow as on the cloud.

