Hello!
I have been using Streamlit for a while now and I love this library, thanks.
I have an application that displays Plotly graphs with 4 filters. It is deployed on Streamlit Cloud (but it could be AWS, etc.).
I have two questions related to performance:
- I want to test UI performance with, say, 10/100/1000 users, so that I know how many servers I will need.
  One idea is to test only the backend processing (Pandas), but this is not very realistic because I want the Streamlit cache to be part of the game.
  I tried pure load testing, i.e. sending a lot of HTTP requests with Locust (I could also use Gatling or JMeter), but it is not what I want, because I want users to hit different filter combinations so that I can see how the cache scales (see the Locust sketch below).
  Another idea would be to mix UI testing (Selenium) with load testing (JMeter, etc.), which I did, but this is not very realistic either, because I spawned all of those users on my local machine and it is very memory-consuming… One realistic way would be to run this in the cloud with distributed users, for example with BlazeMeter (Selenium sketch below as well).
  Any idea on how to test UI performance with n users?
- Related to the previous question: I can’t find how the data is sent from the server to the client. In my case I have 4 filters: if I understand correctly, each time a combination of these 4 filters is selected, the data is filtered on the server side (or taken from the cache if that combination has already been computed), and only the data diff to be displayed in the graphs is sent to the client, in a compressed JSON format (the simplified app below shows what I mean). But I can’t see this anywhere in the HTTP responses… Or maybe the data is also stored on the client side? I feel I am missing something.
  How is the data sent from the server to the client?
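
For context, here is roughly how the app is structured (simplified; the column names, file path, and chart are placeholders, and on older Streamlit versions the decorator would be `@st.cache` instead of `@st.cache_data`):

```python
# app.py (simplified) - the filtering runs on the server, keyed by the
# four filter values, so each new combination is computed once and cached.
import pandas as pd
import plotly.express as px
import streamlit as st

@st.cache_data  # was @st.cache on older versions
def load_data() -> pd.DataFrame:
    return pd.read_parquet("data.parquet")  # placeholder path

@st.cache_data
def filter_data(df: pd.DataFrame, f1, f2, f3, f4) -> pd.DataFrame:
    # Cached per (f1, f2, f3, f4) combination.
    return df[(df.col1 == f1) & (df.col2 == f2) & (df.col3 == f3) & (df.col4 == f4)]

df = load_data()
choices = [st.selectbox(f"Filter {i}", df[col].unique())
           for i, col in enumerate(["col1", "col2", "col3", "col4"], start=1)]

st.plotly_chart(px.scatter(filter_data(df, *choices), x="x", y="y"))
```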
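
This is more or less what my Locust attempt looked like (the host is a placeholder); it only fetches the front page over HTTP, which is exactly why it does not exercise the cache per filter combination:

```python
# locustfile.py - plain HTTP load test, run with: locust -f locustfile.py
from locust import HttpUser, task, between

class StreamlitUser(HttpUser):
    host = "https://my-app.streamlit.app"  # placeholder URL
    wait_time = between(1, 5)  # simulated think time

    @task
    def load_front_page(self):
        # Streamlit serves the static front end here, but widget events and
        # dataframe updates go over a WebSocket, so the filters are never hit.
        self.client.get("/")
```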
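
And this is the kind of Selenium script I mixed with the load test, one browser per simulated user (the CSS selector and XPath are placeholders taken from my app's rendered DOM, and the filter values are made up here):

```python
# selenium_user.py - one simulated user stepping through filter combinations.
import itertools
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

APP_URL = "https://my-app.streamlit.app"  # placeholder URL

def run_one_user(filter_values):
    driver = webdriver.Chrome()
    try:
        driver.get(APP_URL)
        time.sleep(5)  # let the app finish its first run

        for combo in itertools.product(*filter_values):
            boxes = driver.find_elements(By.CSS_SELECTOR, 'div[data-testid="stSelectbox"]')
            for box, value in zip(boxes, combo):
                box.click()  # open the dropdown
                driver.find_element(By.XPATH, f'//li[.//div[text()="{value}"]]').click()
                time.sleep(2)  # let the server rerun and redraw the Plotly chart
    finally:
        driver.quit()

if __name__ == "__main__":
    # Made-up filter values; in reality they come from the dataset.
    run_one_user([["A", "B"], ["2021", "2022"], ["North", "South"], ["Retail", "B2B"]])
```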
Thanks a lot