How to share my Streamlit app with large-scale ML models with multiple users

I’d like to run my Streamlit app for multiple users so that several of them can access it at the same time.

The app includes large-scale pre-trained language models and exceeds the resource limit indicated in Deploy an app - Streamlit Docs, so I plan to run the app some other way.

I’ve checked the discussion in Does streamlit is running on a single-threaded development server by default or not?, and I still wonder whether my app, which loads such a large model with @st.cache(suppress_st_warning=True, allow_output_mutation=True), can run with multiple threads.

Does the cached deep learning model support access from multiple threads?
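For reference, here is a minimal sketch of the caching pattern I mean. The Hugging Face `transformers` calls and the `gpt2` name are just stand-ins for my actual large model, not the real code:

```python
import streamlit as st
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder for the actual large pre-trained model


@st.cache(suppress_st_warning=True, allow_output_mutation=True)
def load_model():
    # Runs once per server process; later sessions reuse the cached
    # objects instead of reloading the weights from disk.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    model.eval()  # inference only, no gradients needed
    return tokenizer, model


tokenizer, model = load_model()
prompt = st.text_input("Prompt")
if prompt:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=50)
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

My question is essentially whether it is safe for several sessions to call into the one cached `model` object at the same time.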

Thank you in advance.

Hey there, were you able to find a solution to this problem? I too have a speech recognition PyTorch model that I have served using Streamlit, and I am expecting multiple users to be working on the app at the same time.

Sadly, the problem is that I cannot pass the recorded speech data directly from an object to the PyTorch model, so I write it to an .mp3 file and then load that .mp3 file into PyTorch.
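A sketch of what I mean, not my exact code. One thing I did realize is that a single fixed filename would collide between concurrent users, so a unique temp file per request seems safer; `torchaudio` here is an assumption about how the file gets loaded into PyTorch:

```python
import os
import tempfile

import torchaudio  # assumption: torchaudio handles the .mp3 -> tensor step


def bytes_to_waveform(audio_bytes: bytes):
    """Write recorded audio bytes to a unique temp file, then load it."""
    # mkstemp gives each call its own file, so simultaneous sessions
    # can't overwrite each other's recordings the way a hard-coded
    # "out.mp3" would.
    fd, path = tempfile.mkstemp(suffix=".mp3")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(audio_bytes)
        # mp3 decoding depends on the installed torchaudio backend
        waveform, sample_rate = torchaudio.load(path)
    finally:
        os.remove(path)  # clean up so temp files don't accumulate
    return waveform, sample_rate
```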

Beyond that, I have no clue how I would handle multiple users running the web app.

Do you have any recommendations or resource links to help me out? I would be highly grateful :+1:

Thanks,

Every Streamlit session loaded from a client is unique. I would think that, when deployed to a Kubernetes cluster or a platform like Google Cloud Run, the app should not have an issue, as the platform scales it depending on demand.
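For example, a minimal container along these lines can be deployed to Cloud Run, which starts more instances as traffic grows. The file names (`app.py`, `requirements.txt`) are assumptions about your project layout:

```dockerfile
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Cloud Run injects the port to listen on via $PORT; fall back to
# Streamlit's default 8501 when running locally.
CMD streamlit run app.py --server.port=${PORT:-8501} --server.address=0.0.0.0
```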
