Are you running your app locally or is it deployed? Locally.
Share the Streamlit and Python versions. Python 3.11, Streamlit 1.31.
How does Streamlit handle multiple HTTP requests? In my chatbot, each user query triggers a POST call to a cloud API that generates an LLM response, which is then returned to the user as the answer.
If multiple users query at the same time, would that cause a bottleneck, since Streamlit waits for each response to return before continuing?
Okay, as long as they can still send their requests out. My fear was that the other two users wouldn't even be able to send a request, and would just be waiting for the first one to finish before they could send theirs.
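To make the concurrency behavior concrete: Streamlit runs each user session's script in its own thread, so a blocking POST in one session doesn't stop other sessions from sending theirs. Below is a minimal sketch of that model using `time.sleep` as a stand-in for the blocking LLM API call (the `handle_query` function and timings are illustrative assumptions, not Streamlit internals): three "sessions" block concurrently, so the total wall time is roughly one call, not three.

```python
import threading
import time

def handle_query(results, i):
    # Stand-in for the blocking POST to the LLM endpoint.
    time.sleep(0.5)
    results[i] = f"answer-{i}"

# Simulate three users querying at once. Each gets its own thread,
# mirroring how Streamlit gives each session its own script thread.
results = {}
start = time.monotonic()
threads = [threading.Thread(target=handle_query, args=(results, i)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# All three requests go out immediately and block in parallel:
# total time is ~0.5 s rather than ~1.5 s serialized.
print(len(results), round(elapsed, 1))
```

So the other users' requests are sent right away; each session only waits on its own response.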