Wondering if anyone in the community has had any luck with performance monitoring? We have a bunch of Streamlit apps that we deploy via dokku. Some are fast and some are slow, depending on what people are doing in the code. I'd like to make sure no one's app takes more than a minute to load.
Has anyone had any luck tracking the performance of Streamlit apps? Tools like New Relic track requests, but with everything handled over websockets there's nothing for them to see.
The least sophisticated way I've tried is just writing timings to stdout and redirecting that into a log I can parse later.
For example, at the top of the Streamlit app:
from datetime import datetime
start_time = datetime.now()
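Then, at the bottom of the script, you can print the elapsed time so it ends up in the redirected log. A rough sketch (the exact log line format is just my assumption, adjust it to whatever you want to grep for):

# at the very end of the app, after everything has rendered
elapsed = (datetime.now() - start_time).total_seconds()
print(f"{datetime.now().isoformat()} page render took {elapsed:.2f}s", flush=True)

Running with something like streamlit run app.py >> app.log then gives you a file you can scan for anything over 60 seconds.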
I was running into the same problem a couple of weeks ago.
Apparently Streamlit uses prometheus_client to export metrics at /metrics if you set --global.metrics=true with streamlit run, like:
streamlit run --global.metrics=true my_lit_app.py
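As a quick sanity check (assuming the app is running on Streamlit's default port 8501), you can hit the endpoint directly before pointing Prometheus at it:

import requests

# should return the Prometheus text format once --global.metrics=true is set
resp = requests.get("http://localhost:8501/metrics")
print(resp.status_code)
print(resp.text[:500])  # first few exported metric lines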
You can create an init method for your Prometheus metrics (Gauges, Summaries, etc.) and then use them across your application. Just make sure you don't create the metrics at module level in the main script: Streamlit reruns that script on every interaction, so they get re-registered and you'll keep getting Duplicated timeseries errors. Then you can scrape the metrics with Prometheus and store them in InfluxDB or the TSDB of your choice.
I visualize the metrics in Grafana; you could use the Prometheus dashboard or something equivalent.
EDIT: Adding an example.
# put this in a module like utils.py
from prometheus_client import Summary

class REGISTRY:
    inited = False

    @classmethod
    def get_metrics(cls):
        # Register the metrics only once; Streamlit reruns the main script on
        # every interaction, and re-registering raises Duplicated timeseries errors
        if not cls.inited:
            cls.REQUEST_TIME = Summary('some_summary', 'Time spent in processing request')
            cls.inited = True
        return cls
# main script
from time import sleep
import streamlit as st
from utils import REGISTRY

METRICS = REGISTRY.get_metrics()

@METRICS.REQUEST_TIME.time()  # records how long each call takes in some_summary
def i_sleep():
    sleep(1.0)
    st.write("something")

i_sleep()
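If it's all wired up, the summary should show up in the /metrics output as some_summary_count and some_summary_sum, and Prometheus will pick it up on its normal scrape interval.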